Jan 30 00:10:09 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 00:10:10 crc kubenswrapper[5103]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 00:10:10 crc kubenswrapper[5103]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 30 00:10:10 crc kubenswrapper[5103]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 00:10:10 crc kubenswrapper[5103]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 00:10:10 crc kubenswrapper[5103]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 30 00:10:10 crc kubenswrapper[5103]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.528799 5103 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.533260 5103 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.533284 5103 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.533294 5103 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.533302 5103 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.533313 5103 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534810 5103 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534838 5103 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534849 5103 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534858 5103 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534868 5103 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534877 5103 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534885 5103 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534892 5103
feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534900 5103 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534907 5103 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534914 5103 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534921 5103 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534931 5103 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534938 5103 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534945 5103 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534952 5103 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534959 5103 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534966 5103 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534976 5103 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534986 5103 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.534994 5103 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535001 5103 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535009 5103 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535016 5103 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535024 5103 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535033 5103 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535042 5103 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535078 5103 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535087 5103 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535095 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535103 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535110 5103 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535117 5103 
feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535125 5103 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535132 5103 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535139 5103 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535146 5103 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535154 5103 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535161 5103 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535168 5103 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535175 5103 feature_gate.go:328] unrecognized feature gate: Example2 Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535183 5103 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535191 5103 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535198 5103 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535207 5103 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535214 5103 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535221 5103 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535228 5103 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535236 5103 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535243 5103 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535250 5103 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535257 5103 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535264 5103 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535274 5103 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535284 5103 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535293 5103 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535302 5103 feature_gate.go:328] unrecognized feature gate: Example Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535311 5103 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535319 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535328 5103 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535338 5103 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535345 5103 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535353 5103 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535360 5103 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535368 5103 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535375 5103 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535382 5103 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535389 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535395 5103 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535403 5103 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535410 5103 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535417 5103 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535424 5103 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535431 5103 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535438 5103 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535445 5103 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535452 5103 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535459 5103 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535467 5103 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535474 5103 feature_gate.go:328] 
unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.535481 5103 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537859 5103 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537876 5103 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537884 5103 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537892 5103 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537901 5103 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537908 5103 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537915 5103 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537923 5103 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537930 5103 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537937 5103 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537944 5103 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537952 5103 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537959 5103 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537967 5103 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537975 5103 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537983 5103 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537990 5103 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.537997 5103 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538008 5103 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538016 5103 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538024 5103 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538033 5103 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538040 5103 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538079 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538090 5103 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538100 5103 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538110 5103 feature_gate.go:328] unrecognized feature gate: Example2 Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538119 5103 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538130 5103 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538140 5103 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538151 5103 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538158 5103 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538166 5103 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538173 5103 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538180 5103 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538187 5103 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538195 5103 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538203 5103 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538211 5103 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538218 5103 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538226 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538233 5103 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538241 5103 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538248 5103 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538255 5103 feature_gate.go:328] unrecognized feature gate: 
InsightsOnDemandDataGather Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538264 5103 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538271 5103 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538278 5103 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538286 5103 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538293 5103 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538300 5103 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538307 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538314 5103 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538320 5103 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538328 5103 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538335 5103 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538344 5103 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538353 5103 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538364 5103 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538371 5103 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538378 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538388 5103 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538395 5103 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538402 5103 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538409 5103 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538417 5103 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538424 5103 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538432 5103 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538439 5103 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538446 5103 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538453 5103 feature_gate.go:328] unrecognized feature gate: Example Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538460 5103 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538467 5103 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538474 5103 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538481 5103 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538488 5103 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538495 5103 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538502 5103 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538509 5103 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538516 5103 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538523 5103 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538530 5103 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538537 5103 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538544 5103 
feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538551 5103 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.538558 5103 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538685 5103 flags.go:64] FLAG: --address="0.0.0.0" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538701 5103 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538714 5103 flags.go:64] FLAG: --anonymous-auth="true" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538724 5103 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538734 5103 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538742 5103 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538753 5103 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538764 5103 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538774 5103 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538783 5103 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538792 5103 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538801 5103 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538811 5103 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538819 5103 flags.go:64] FLAG: --cgroup-root="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538827 5103 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538835 5103 flags.go:64] FLAG: --client-ca-file="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538843 5103 flags.go:64] FLAG: --cloud-config="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538851 5103 flags.go:64] FLAG: --cloud-provider="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538858 5103 flags.go:64] FLAG: --cluster-dns="[]" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538868 5103 flags.go:64] FLAG: --cluster-domain="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538876 5103 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538885 5103 flags.go:64] FLAG: --config-dir="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538893 5103 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538902 5103 flags.go:64] FLAG: --container-log-max-files="5" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538912 5103 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538920 5103 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538927 
5103 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538936 5103 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538944 5103 flags.go:64] FLAG: --contention-profiling="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538952 5103 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538959 5103 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538968 5103 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538975 5103 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538985 5103 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.538993 5103 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539001 5103 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539009 5103 flags.go:64] FLAG: --enable-load-reader="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539017 5103 flags.go:64] FLAG: --enable-server="true" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539027 5103 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539037 5103 flags.go:64] FLAG: --event-burst="100" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539045 5103 flags.go:64] FLAG: --event-qps="50" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539080 5103 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539090 5103 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539100 5103 flags.go:64] FLAG: --eviction-hard="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539110 5103 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539118 5103 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539126 5103 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539134 5103 flags.go:64] FLAG: --eviction-soft="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539143 5103 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539150 5103 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539158 5103 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539166 5103 flags.go:64] FLAG: --experimental-mounter-path="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539174 5103 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539181 5103 flags.go:64] FLAG: --fail-swap-on="true" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539190 5103 flags.go:64] FLAG: --feature-gates="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539199 5103 flags.go:64] FLAG: --file-check-frequency="20s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539208 5103 
flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539216 5103 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539224 5103 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539232 5103 flags.go:64] FLAG: --healthz-port="10248" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539240 5103 flags.go:64] FLAG: --help="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539248 5103 flags.go:64] FLAG: --hostname-override="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539256 5103 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539264 5103 flags.go:64] FLAG: --http-check-frequency="20s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539272 5103 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539279 5103 flags.go:64] FLAG: --image-credential-provider-config="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539288 5103 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539296 5103 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539305 5103 flags.go:64] FLAG: --image-service-endpoint="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539314 5103 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539321 5103 flags.go:64] FLAG: --kube-api-burst="100" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539329 5103 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539338 5103 flags.go:64] FLAG: --kube-api-qps="50" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539345 5103 flags.go:64] FLAG: --kube-reserved="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539354 5103 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539363 5103 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539371 5103 flags.go:64] FLAG: --kubelet-cgroups="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539379 5103 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539387 5103 flags.go:64] FLAG: --lock-file="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539394 5103 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539402 5103 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539410 5103 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539422 5103 flags.go:64] FLAG: --log-json-split-stream="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539430 5103 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539437 5103 flags.go:64] FLAG: --log-text-split-stream="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539445 5103 flags.go:64] FLAG: --logging-format="text" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539453 5103 flags.go:64] FLAG: 
--machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539461 5103 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539469 5103 flags.go:64] FLAG: --manifest-url="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539477 5103 flags.go:64] FLAG: --manifest-url-header="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539487 5103 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539495 5103 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539506 5103 flags.go:64] FLAG: --max-pods="110" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539514 5103 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539523 5103 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539533 5103 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539544 5103 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539555 5103 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539566 5103 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539575 5103 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539598 5103 flags.go:64] FLAG: --node-status-max-images="50" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539608 5103 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539619 5103 flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539629 5103 flags.go:64] FLAG: --pod-cidr="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539639 5103 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539656 5103 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539665 5103 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539677 5103 flags.go:64] FLAG: --pods-per-core="0" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539686 5103 flags.go:64] FLAG: --port="10250" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539695 5103 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539705 5103 flags.go:64] FLAG: --provider-id="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539715 5103 flags.go:64] FLAG: --qos-reserved="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539725 5103 flags.go:64] FLAG: --read-only-port="10255" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539736 5103 flags.go:64] FLAG: --register-node="true" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539745 5103 flags.go:64] FLAG: --register-schedulable="true" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539755 5103 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539772 5103 flags.go:64] FLAG: --registry-burst="10" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539782 5103 flags.go:64] FLAG: --registry-qps="5" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539792 5103 flags.go:64] FLAG: --reserved-cpus="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539801 5103 flags.go:64] FLAG: --reserved-memory="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539813 5103 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539824 5103 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539834 5103 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539841 5103 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539849 5103 flags.go:64] FLAG: --runonce="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539857 5103 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539866 5103 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539874 5103 flags.go:64] FLAG: --seccomp-default="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539883 5103 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539891 5103 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539899 5103 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539907 5103 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539916 5103 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539924 5103 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539931 5103 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539939 5103 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539947 5103 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539955 5103 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539963 5103 flags.go:64] FLAG: --system-cgroups="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539971 5103 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539985 5103 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.539993 5103 flags.go:64] FLAG: --tls-cert-file="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.540001 5103 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.540022 5103 flags.go:64] FLAG: --tls-min-version="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.540030 5103 flags.go:64] FLAG: --tls-private-key-file="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.540038 5103 flags.go:64] FLAG: 
--topology-manager-policy="none" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.540045 5103 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.540092 5103 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.540102 5103 flags.go:64] FLAG: --v="2" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.540471 5103 flags.go:64] FLAG: --version="false" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.540483 5103 flags.go:64] FLAG: --vmodule="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.540493 5103 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.540502 5103 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540678 5103 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540687 5103 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540696 5103 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540716 5103 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540723 5103 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540731 5103 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540738 5103 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540746 5103 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540753 5103 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540760 5103 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540767 5103 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540778 5103 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540786 5103 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540793 5103 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540800 5103 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540808 5103 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540815 5103 feature_gate.go:328] unrecognized feature gate: Example2 Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540822 5103 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540830 5103 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540837 5103 feature_gate.go:328] 
unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540846 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540853 5103 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540860 5103 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540868 5103 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540875 5103 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540882 5103 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540918 5103 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540925 5103 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540932 5103 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540940 5103 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540950 5103 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540958 5103 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540966 5103 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540973 5103 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540981 5103 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540991 5103 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.540998 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541006 5103 feature_gate.go:328] unrecognized feature gate: Example Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541013 5103 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541020 5103 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541027 5103 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541034 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541041 5103 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541084 5103 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541094 5103 feature_gate.go:328] unrecognized feature 
gate: GatewayAPIController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541102 5103 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541112 5103 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541121 5103 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541128 5103 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541134 5103 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541142 5103 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541149 5103 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541157 5103 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541166 5103 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541173 5103 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541180 5103 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541186 5103 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541194 5103 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541201 5103 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541208 5103 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541215 5103 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541222 5103 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541230 5103 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541237 5103 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541244 5103 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541252 5103 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541260 5103 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541272 5103 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541280 5103 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541289 5103 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 
00:10:10.541296 5103 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541304 5103 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541314 5103 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541323 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541332 5103 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541343 5103 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541352 5103 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541361 5103 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541370 5103 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541377 5103 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541385 5103 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541392 5103 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541402 5103 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541411 5103 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541419 5103 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.541427 5103 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.541452 5103 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.557110 5103 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.557184 5103 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557276 5103 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557290 5103 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557298 5103 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557305 5103 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557318 5103 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557325 5103 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557332 5103 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557338 5103 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557344 5103 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557350 5103 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557356 5103 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557361 5103 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557367 5103 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557373 5103 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557380 5103 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557386 5103 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 30 00:10:10 crc
kubenswrapper[5103]: W0130 00:10:10.557391 5103 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557398 5103 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557404 5103 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557410 5103 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557415 5103 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557421 5103 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557429 5103 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557439 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557445 5103 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557451 5103 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557457 5103 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557464 5103 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557470 5103 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557475 5103 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557481 5103 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557487 5103 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557494 5103 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557502 5103 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557508 5103 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557514 5103 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557520 5103 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557535 5103 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557542 5103 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557548 5103 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557554 5103 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 
00:10:10.557560 5103 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557568 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557574 5103 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557580 5103 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557585 5103 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557591 5103 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557597 5103 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557603 5103 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557608 5103 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557614 5103 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557620 5103 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557626 5103 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557631 5103 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557637 5103 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557643 5103 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557649 5103 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557654 5103 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557660 5103 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557665 5103 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557671 5103 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557677 5103 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557682 5103 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557688 5103 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557697 5103 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557705 5103 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557711 5103 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557717 5103 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557723 5103 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557729 5103 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557735 5103 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557741 5103 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557747 5103 feature_gate.go:328] unrecognized feature gate: Example2 Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557754 5103 feature_gate.go:328] unrecognized feature gate: Example Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557760 5103 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557765 5103 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557771 5103 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557778 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557783 5103 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557789 5103 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557794 5103 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557800 5103 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557806 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557814 5103 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557820 5103 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.557826 5103 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.557836 5103 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558005 5103 feature_gate.go:328] 
unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558016 5103 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558023 5103 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558029 5103 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558036 5103 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558042 5103 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558089 5103 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558096 5103 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558102 5103 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558108 5103 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558114 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558121 5103 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558127 5103 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558132 5103 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558140 5103 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558150 5103 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558157 5103 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558163 5103 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558168 5103 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558174 5103 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558179 5103 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558185 5103 feature_gate.go:328] unrecognized feature gate: Example Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558193 5103 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558200 5103 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558207 5103 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558213 5103 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558221 5103 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558228 5103 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558236 5103 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558243 5103 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558250 5103 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558256 5103 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558262 5103 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558268 5103 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558274 5103 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558280 5103 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558286 5103 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558291 5103 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558297 5103 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558303 5103 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558310 5103 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558316 5103 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558322 5103 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558330 5103 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558337 5103 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558342 5103 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558348 5103 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558353 5103 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 30 00:10:10 crc kubenswrapper[5103]: 
W0130 00:10:10.558359 5103 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558366 5103 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558372 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558377 5103 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558383 5103 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558389 5103 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558394 5103 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558400 5103 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558406 5103 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558412 5103 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558418 5103 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558424 5103 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558431 5103 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558437 5103 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558444 5103 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558450 5103 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558456 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558463 5103 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558469 5103 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558475 5103 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558481 5103 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558487 5103 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558493 5103 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558499 5103 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558504 5103 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558511 5103 feature_gate.go:328] 
unrecognized feature gate: DNSNameResolver Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558516 5103 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558524 5103 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558531 5103 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558537 5103 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558543 5103 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558549 5103 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558555 5103 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558561 5103 feature_gate.go:328] unrecognized feature gate: Example2 Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558569 5103 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558575 5103 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558581 5103 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 00:10:10 crc kubenswrapper[5103]: W0130 00:10:10.558588 5103 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.558599 5103 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.559724 5103 server.go:962] "Client rotation is on, will bootstrap in background" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.565861 5103 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.569349 5103 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.569448 5103 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.570638 5103 server.go:1019] "Starting client certificate rotation" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.570767 5103 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.571618 5103 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 30 
00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.601154 5103 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.605326 5103 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.608112 5103 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.625223 5103 log.go:25] "Validated CRI v1 runtime API" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.684718 5103 log.go:25] "Validated CRI v1 image API" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.687279 5103 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.694085 5103 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-30-00-03-11-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.694131 5103 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.717774 5103 manager.go:217] Machine: {Timestamp:2026-01-30 00:10:10.714926344 +0000 UTC m=+0.586424396 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:b34fa1b9-01b6-49ac-be3d-2edda0be241f BootID:1ea1cdda-a321-4572-bf74-7f3caace2231 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 
Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b4:fe:36 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b4:fe:36 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ed:d9:8b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:4e:bf:f5 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:42:e5:33 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7c:f2:4f Speed:-1 Mtu:1496} {Name:eth10 MacAddress:36:8d:b2:5f:bf:ec Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0a:7d:71:6c:74:c5 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 
Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.718134 5103 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.718349 5103 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.720458 5103 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.720548 5103 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.720861 5103 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.720882 5103 container_manager_linux.go:306] "Creating device plugin manager" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.720939 5103 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.721776 5103 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.722302 5103 state_mem.go:36] "Initialized new in-memory state store" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.722590 5103 server.go:1267] "Using root directory" 
path="/var/lib/kubelet" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.725158 5103 kubelet.go:491] "Attempting to sync node with API server" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.725638 5103 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.725688 5103 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.725711 5103 kubelet.go:397] "Adding apiserver pod source" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.725744 5103 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.729642 5103 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.729674 5103 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.730590 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.730846 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.731775 5103 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.731804 5103 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.735832 5103 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.736307 5103 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.736877 5103 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.737736 5103 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.737770 5103 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.737777 5103 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.737784 5103 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.737792 5103 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.737800 5103 
plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.737808 5103 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.737818 5103 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.737829 5103 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.737840 5103 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.737871 5103 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.738253 5103 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.739162 5103 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.739179 5103 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.740129 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.762828 5103 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.762890 5103 server.go:1295] "Started kubelet" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.763168 5103 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.763246 5103 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.763374 5103 server_v1.go:47] "podresources" method="list" useActivePods=true Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.764205 5103 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 00:10:10 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.766240 5103 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.769242 5103 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.771414 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.771882 5103 volume_manager.go:295] "The desired_state_of_world populator starts" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.771919 5103 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.772164 5103 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.772281 5103 server.go:317] "Adding debug handlers to kubelet server" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.772555 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.772758 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="200ms" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.773557 5103 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f59b4983a7ea3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.762849955 +0000 UTC m=+0.634347997,LastTimestamp:2026-01-30 00:10:10.762849955 +0000 UTC m=+0.634347997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.792243 5103 factory.go:55] Registering systemd factory Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.792622 5103 factory.go:223] Registration of the systemd container factory successfully Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.794088 5103 factory.go:153] Registering CRI-O factory Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.794134 5103 factory.go:223] Registration of the crio container factory successfully Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.794285 5103 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.794331 5103 factory.go:103] Registering Raw factory Jan 30 00:10:10 crc 
kubenswrapper[5103]: I0130 00:10:10.794362 5103 manager.go:1196] Started watching for new ooms in manager Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.795676 5103 manager.go:319] Starting recovery of all containers Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.833791 5103 manager.go:324] Recovery completed Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854331 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854429 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854445 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854456 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854465 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854476 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854486 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854519 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854534 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854544 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854556 
5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854564 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854575 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854624 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854668 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854714 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854724 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854772 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854784 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854791 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854823 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854849 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854858 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854887 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854912 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854950 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854967 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854976 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.854990 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855032 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855084 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855113 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855125 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:10 crc 
kubenswrapper[5103]: I0130 00:10:10.855121 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855168 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855483 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855496 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855506 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855514 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855523 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855557 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855565 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855572 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855580 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855587 5103 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855595 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855603 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855610 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855619 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855627 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855637 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855645 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855654 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855662 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855684 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855693 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855703 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855762 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855774 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855783 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855796 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855832 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855843 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855852 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855861 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855869 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855878 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855890 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855898 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855929 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855939 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855949 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855957 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855965 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855973 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855982 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.855990 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856004 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" 
volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856015 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856028 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856040 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856067 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856076 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856085 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856094 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856102 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856111 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856121 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856130 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" 
volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856139 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856149 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856157 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856166 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856174 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856182 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856191 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856202 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856210 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856218 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856225 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856239 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856260 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856272 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856282 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856292 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856300 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856312 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856321 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856330 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856341 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856352 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" 
volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856363 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.856371 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.858575 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.858641 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.858747 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.858786 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.858804 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.858826 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.858844 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.860951 5103 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.862332 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.862394 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.862416 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.866703 5103 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.866853 5103 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.866916 5103 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.866961 5103 kubelet.go:2451] "Starting kubelet main sync loop" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.867168 5103 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868114 5103 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868197 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868217 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868238 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868252 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868267 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868283 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868309 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868319 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868330 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868342 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868354 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868370 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868382 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868397 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868419 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868437 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868452 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868464 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868477 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868540 5103 cpu_manager.go:222] "Starting CPU manager" policy="none" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868558 5103 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868777 5103 state_mem.go:36] "Initialized new in-memory state store" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868871 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868944 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.868988 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869135 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869170 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869191 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869226 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869296 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869345 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869378 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869406 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869443 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869501 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869543 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869566 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869592 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869613 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869821 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869871 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869901 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869940 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.869968 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870005 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870025 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870103 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870137 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870151 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870169 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870183 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870198 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870211 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870226 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870241 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870254 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870269 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870282 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870301 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870315 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870328 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870342 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.870327 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870358 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870406 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870435 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870463 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870492 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870519 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870545 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870566 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870593 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870615 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870641 5103 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870673 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870792 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870819 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870841 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870866 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870888 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870925 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870954 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870975 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.870999 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.871019 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.871045 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.871092 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.871111 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.871138 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.871272 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.872821 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.872867 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.872885 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.872899 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.872913 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.872927 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.872925 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.872939 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.872955 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.872972 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.872986 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873001 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873015 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873028 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873041 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873102 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873118 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" 
Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873132 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873146 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873159 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873229 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873240 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873253 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873288 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873303 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873316 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873332 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873344 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 30 00:10:10 crc 
kubenswrapper[5103]: I0130 00:10:10.873356 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873368 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873380 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873393 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873406 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873435 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873448 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873460 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873475 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873487 5103 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873499 5103 reconstruct.go:97] "Volume reconstruction finished" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.873508 5103 reconciler.go:26] "Reconciler: start to sync state" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.878696 5103 policy_none.go:49] "None policy: Start" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 
00:10:10.878723 5103 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.878740 5103 state_mem.go:35] "Initializing new in-memory state store" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.933330 5103 manager.go:341] "Starting Device Plugin manager" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.933693 5103 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.933708 5103 server.go:85] "Starting device plugin registration server" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.934241 5103 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.934256 5103 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.934424 5103 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.934642 5103 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.934666 5103 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.939065 5103 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.939137 5103 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.967891 5103 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.968192 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.970152 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.970209 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.970230 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.971315 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.971471 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.971516 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.972170 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.972206 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.972233 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.972233 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.972253 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.972268 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:10 crc kubenswrapper[5103]: E0130 00:10:10.973548 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="400ms" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.973667 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.973734 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.973775 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974089 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974147 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974249 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974279 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974323 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974348 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974358 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974390 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974395 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974409 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974552 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974586 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974604 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974596 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974660 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.974693 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.975761 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.975811 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.975969 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.976780 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.976820 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.976838 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.976900 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.977088 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.977166 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.977186 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.977865 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.978066 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.978136 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.978839 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.978877 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.978940 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.979693 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.979727 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.979740 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.980149 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.980186 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.980737 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.980765 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:10 crc kubenswrapper[5103]: I0130 00:10:10.980775 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:11 crc kubenswrapper[5103]: E0130 00:10:11.012041 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:11 crc kubenswrapper[5103]: E0130 00:10:11.019456 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.034999 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.035841 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.036153 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.036343 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.036519 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:11 crc kubenswrapper[5103]: E0130 00:10:11.037379 5103 kubelet_node_status.go:110] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Jan 30 00:10:11 crc kubenswrapper[5103]: E0130 00:10:11.038602 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:11 crc kubenswrapper[5103]: E0130 00:10:11.058795 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:11 crc kubenswrapper[5103]: E0130 00:10:11.070637 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.075924 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.075975 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076024 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076161 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076231 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076233 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076257 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076284 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076325 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076343 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076412 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076423 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076472 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076528 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076531 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076610 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076686 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076692 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076726 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076768 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076795 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076810 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076827 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076838 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076932 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.076969 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.077004 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.077026 5103 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.077034 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.077120 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.077157 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.077224 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.077296 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.077362 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.077402 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.077829 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.078931 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.178121 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.178218 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.178273 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.178330 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.178374 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.178433 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.178509 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.179072 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.179166 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.179188 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.179108 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.179252 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.179266 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.179307 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.237958 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.239396 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.239466 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.239492 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.239540 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:11 crc kubenswrapper[5103]: E0130 00:10:11.240277 5103 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.313284 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.319945 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.339928 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.359950 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.372452 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:11 crc kubenswrapper[5103]: E0130 00:10:11.375253 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="800ms" Jan 30 00:10:11 crc kubenswrapper[5103]: W0130 00:10:11.381105 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-7c8938edc67f83fd0a602c8ee1b124dbba440d709435e418ed9db65ce5ee5eba WatchSource:0}: Error finding container 7c8938edc67f83fd0a602c8ee1b124dbba440d709435e418ed9db65ce5ee5eba: Status 404 returned error can't find the container with id 7c8938edc67f83fd0a602c8ee1b124dbba440d709435e418ed9db65ce5ee5eba Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.387241 5103 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:10:11 crc kubenswrapper[5103]: W0130 00:10:11.389547 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-4c384091fc366b4f1b10437b0c17b49be9b4a3c74aa9641c55163ac9739bd742 WatchSource:0}: Error finding container 4c384091fc366b4f1b10437b0c17b49be9b4a3c74aa9641c55163ac9739bd742: Status 404 returned error can't find the container with id 4c384091fc366b4f1b10437b0c17b49be9b4a3c74aa9641c55163ac9739bd742 Jan 30 00:10:11 crc kubenswrapper[5103]: W0130 00:10:11.407090 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-2e5f162dce23e98c1b1ee1f592dbc49612cb43fd4a9dcf3a111bdb6dd20da921 WatchSource:0}: Error finding container 2e5f162dce23e98c1b1ee1f592dbc49612cb43fd4a9dcf3a111bdb6dd20da921: Status 404 returned error can't find the container with id 2e5f162dce23e98c1b1ee1f592dbc49612cb43fd4a9dcf3a111bdb6dd20da921 Jan 30 00:10:11 crc kubenswrapper[5103]: W0130 00:10:11.417285 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-6be8f08a6b0e245fd0e4b285555a6d5d8496d1e7a70284f428953d8c677d7a02 WatchSource:0}: Error finding container 6be8f08a6b0e245fd0e4b285555a6d5d8496d1e7a70284f428953d8c677d7a02: Status 404 returned error can't find the container with id 6be8f08a6b0e245fd0e4b285555a6d5d8496d1e7a70284f428953d8c677d7a02 Jan 30 00:10:11 crc kubenswrapper[5103]: W0130 00:10:11.420543 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-ea0a225e896f0835fa37f8c50596daa3d6862023daf510b197074727debd70fa WatchSource:0}: Error finding container ea0a225e896f0835fa37f8c50596daa3d6862023daf510b197074727debd70fa: Status 404 returned error 
can't find the container with id ea0a225e896f0835fa37f8c50596daa3d6862023daf510b197074727debd70fa Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.640708 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.642152 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.642194 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.642208 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.642239 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:11 crc kubenswrapper[5103]: E0130 00:10:11.642914 5103 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.741242 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.873367 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ea0a225e896f0835fa37f8c50596daa3d6862023daf510b197074727debd70fa"} Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.874456 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6be8f08a6b0e245fd0e4b285555a6d5d8496d1e7a70284f428953d8c677d7a02"} Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.875556 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2e5f162dce23e98c1b1ee1f592dbc49612cb43fd4a9dcf3a111bdb6dd20da921"} Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.877436 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4c384091fc366b4f1b10437b0c17b49be9b4a3c74aa9641c55163ac9739bd742"} Jan 30 00:10:11 crc kubenswrapper[5103]: I0130 00:10:11.878887 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"7c8938edc67f83fd0a602c8ee1b124dbba440d709435e418ed9db65ce5ee5eba"} Jan 30 00:10:11 crc kubenswrapper[5103]: E0130 00:10:11.907888 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:10:12 crc kubenswrapper[5103]: E0130 
00:10:12.089788 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:10:12 crc kubenswrapper[5103]: E0130 00:10:12.176410 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="1.6s" Jan 30 00:10:12 crc kubenswrapper[5103]: E0130 00:10:12.204466 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:10:12 crc kubenswrapper[5103]: E0130 00:10:12.322323 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.443624 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.444656 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.444739 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.444762 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.444803 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:12 crc kubenswrapper[5103]: E0130 00:10:12.445557 5103 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.638644 5103 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 30 00:10:12 crc kubenswrapper[5103]: E0130 00:10:12.641882 5103 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.741278 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 
00:10:12.884757 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65"} Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.884805 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9"} Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.887477 5103 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541" exitCode=0 Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.887598 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541"} Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.887606 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.888404 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.888462 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.888480 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:12 crc kubenswrapper[5103]: E0130 00:10:12.888767 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.890109 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2" exitCode=0 Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.890167 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2"} Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.890401 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.891554 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.891606 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.891637 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:12 crc kubenswrapper[5103]: E0130 00:10:12.891952 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:12 crc 
kubenswrapper[5103]: I0130 00:10:12.892376 5103 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d" exitCode=0 Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.892465 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d"} Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.892674 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.893481 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.894309 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.894353 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.894372 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.894466 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.894490 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.894501 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:12 crc kubenswrapper[5103]: E0130 00:10:12.894632 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.895744 5103 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e" exitCode=0 Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.895788 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e"} Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.895923 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.896773 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.896811 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:12 crc kubenswrapper[5103]: I0130 00:10:12.896839 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:12 crc kubenswrapper[5103]: E0130 00:10:12.897109 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" 
node="crc" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.741621 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Jan 30 00:10:13 crc kubenswrapper[5103]: E0130 00:10:13.777692 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="3.2s" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.903614 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29"} Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.903658 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94"} Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.903667 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50"} Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.903749 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.906010 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.906075 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.906086 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:13 crc kubenswrapper[5103]: E0130 00:10:13.906452 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.910189 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e"} Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.910209 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be"} Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.910219 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049"} Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.914278 5103 generic.go:358] "Generic 
(PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b" exitCode=0 Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.914330 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b"} Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.914505 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.915448 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.915489 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.915503 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:13 crc kubenswrapper[5103]: E0130 00:10:13.915743 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.916621 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6"} Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.916722 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.917323 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.917354 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.917368 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:13 crc kubenswrapper[5103]: E0130 00:10:13.918199 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.923339 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406"} Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.923375 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514"} Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.923527 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.924556 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.924601 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:13 crc kubenswrapper[5103]: I0130 00:10:13.924615 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:13 crc kubenswrapper[5103]: E0130 00:10:13.924840 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.046532 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.048129 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.048186 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.048200 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.048232 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:14 crc kubenswrapper[5103]: E0130 00:10:14.048898 5103 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Jan 30 00:10:14 crc kubenswrapper[5103]: E0130 00:10:14.050534 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:10:14 crc kubenswrapper[5103]: E0130 00:10:14.172374 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:10:14 crc kubenswrapper[5103]: E0130 00:10:14.442555 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.931920 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"65363b989b574f53d3f93658c788e813617b988cbde0215f87ecc7dfd9d34caa"} Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.931989 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6"} Jan 30 00:10:14 crc 
kubenswrapper[5103]: I0130 00:10:14.932120 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.933096 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.933160 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.933187 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:14 crc kubenswrapper[5103]: E0130 00:10:14.933732 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.935581 5103 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4" exitCode=0 Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.935636 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4"} Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.935780 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.935833 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.935895 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.935861 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.935856 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.936680 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.936714 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.936757 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.936777 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.936723 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.936829 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.936759 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.936892 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.936911 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.936941 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.936973 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:14 crc kubenswrapper[5103]: I0130 00:10:14.936984 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:14 crc kubenswrapper[5103]: E0130 00:10:14.937119 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:14 crc kubenswrapper[5103]: E0130 00:10:14.937334 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:14 crc kubenswrapper[5103]: E0130 00:10:14.937512 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:14 crc kubenswrapper[5103]: E0130 00:10:14.938288 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:15 crc kubenswrapper[5103]: I0130 00:10:15.946227 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301"} Jan 30 00:10:15 crc kubenswrapper[5103]: I0130 00:10:15.946293 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383"} Jan 30 00:10:15 crc kubenswrapper[5103]: I0130 00:10:15.946317 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020"} Jan 30 00:10:15 crc kubenswrapper[5103]: I0130 00:10:15.946375 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:15 crc kubenswrapper[5103]: I0130 00:10:15.946564 5103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 00:10:15 crc kubenswrapper[5103]: I0130 00:10:15.946641 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:15 crc kubenswrapper[5103]: I0130 00:10:15.947548 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:15 crc kubenswrapper[5103]: I0130 00:10:15.947619 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:15 crc kubenswrapper[5103]: I0130 00:10:15.947647 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:15 crc kubenswrapper[5103]: I0130 00:10:15.947700 
5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:15 crc kubenswrapper[5103]: I0130 00:10:15.947753 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:15 crc kubenswrapper[5103]: I0130 00:10:15.947773 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:15 crc kubenswrapper[5103]: E0130 00:10:15.948342 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:15 crc kubenswrapper[5103]: E0130 00:10:15.948798 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:16 crc kubenswrapper[5103]: I0130 00:10:16.602516 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:16 crc kubenswrapper[5103]: I0130 00:10:16.878130 5103 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 30 00:10:16 crc kubenswrapper[5103]: I0130 00:10:16.961404 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c"} Jan 30 00:10:16 crc kubenswrapper[5103]: I0130 00:10:16.961505 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226"} Jan 30 00:10:16 crc kubenswrapper[5103]: I0130 00:10:16.961679 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:16 crc kubenswrapper[5103]: I0130 00:10:16.962223 5103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 00:10:16 crc kubenswrapper[5103]: I0130 00:10:16.962355 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:16 crc kubenswrapper[5103]: I0130 00:10:16.963100 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:16 crc kubenswrapper[5103]: I0130 00:10:16.963164 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:16 crc kubenswrapper[5103]: I0130 00:10:16.963188 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:16 crc kubenswrapper[5103]: I0130 00:10:16.963459 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:16 crc kubenswrapper[5103]: I0130 00:10:16.963540 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:16 crc kubenswrapper[5103]: I0130 00:10:16.963605 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:16 crc kubenswrapper[5103]: E0130 00:10:16.963653 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:16 crc 
kubenswrapper[5103]: E0130 00:10:16.964231 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.249366 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.251375 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.251573 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.251842 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.252035 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.964597 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.966559 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.966634 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.966664 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:17 crc kubenswrapper[5103]: E0130 00:10:17.967492 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.968479 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.968957 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.970313 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.970413 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.970482 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:17 crc kubenswrapper[5103]: E0130 00:10:17.971484 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:17 crc kubenswrapper[5103]: I0130 00:10:17.980777 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:18 crc kubenswrapper[5103]: I0130 00:10:18.945712 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 00:10:18 crc kubenswrapper[5103]: I0130 00:10:18.961006 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:18 crc kubenswrapper[5103]: I0130 00:10:18.968043 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:18 crc kubenswrapper[5103]: I0130 00:10:18.968116 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:18 crc kubenswrapper[5103]: I0130 00:10:18.969311 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:18 crc kubenswrapper[5103]: I0130 00:10:18.969379 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:18 crc kubenswrapper[5103]: I0130 00:10:18.969325 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:18 crc kubenswrapper[5103]: I0130 00:10:18.969406 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:18 crc kubenswrapper[5103]: I0130 00:10:18.969504 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:18 crc kubenswrapper[5103]: I0130 00:10:18.969544 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:18 crc kubenswrapper[5103]: E0130 00:10:18.970031 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:18 crc kubenswrapper[5103]: E0130 00:10:18.970508 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:19 crc kubenswrapper[5103]: I0130 00:10:19.507304 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:19 crc kubenswrapper[5103]: I0130 00:10:19.507611 5103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 00:10:19 crc kubenswrapper[5103]: I0130 00:10:19.507670 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:19 crc kubenswrapper[5103]: I0130 00:10:19.509032 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:19 crc kubenswrapper[5103]: I0130 00:10:19.509164 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:19 crc kubenswrapper[5103]: I0130 00:10:19.509195 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:19 crc kubenswrapper[5103]: E0130 00:10:19.509917 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:19 crc kubenswrapper[5103]: I0130 00:10:19.970760 5103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 00:10:19 crc kubenswrapper[5103]: I0130 00:10:19.970864 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:19 crc kubenswrapper[5103]: I0130 00:10:19.972301 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 
00:10:19 crc kubenswrapper[5103]: I0130 00:10:19.972382 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:19 crc kubenswrapper[5103]: I0130 00:10:19.972403 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:19 crc kubenswrapper[5103]: E0130 00:10:19.973154 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.457940 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.458335 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.459616 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.459726 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.459768 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:20 crc kubenswrapper[5103]: E0130 00:10:20.460496 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.931225 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.931595 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.933172 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.933277 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.933310 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:20 crc kubenswrapper[5103]: E0130 00:10:20.934193 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.939220 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:20 crc kubenswrapper[5103]: E0130 00:10:20.939456 5103 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.973816 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.974962 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.975083 5103 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:20 crc kubenswrapper[5103]: I0130 00:10:20.975107 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:20 crc kubenswrapper[5103]: E0130 00:10:20.975694 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:21 crc kubenswrapper[5103]: I0130 00:10:21.458808 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:21 crc kubenswrapper[5103]: I0130 00:10:21.976628 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:21 crc kubenswrapper[5103]: I0130 00:10:21.978013 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:21 crc kubenswrapper[5103]: I0130 00:10:21.978086 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:21 crc kubenswrapper[5103]: I0130 00:10:21.978096 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:21 crc kubenswrapper[5103]: E0130 00:10:21.978460 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:24 crc kubenswrapper[5103]: I0130 00:10:24.459306 5103 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 00:10:24 crc kubenswrapper[5103]: I0130 00:10:24.459431 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 00:10:24 crc kubenswrapper[5103]: I0130 00:10:24.743286 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 30 00:10:25 crc kubenswrapper[5103]: I0130 00:10:25.005628 5103 trace.go:236] Trace[750041310]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:10:15.003) (total time: 10001ms): Jan 30 00:10:25 crc kubenswrapper[5103]: Trace[750041310]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:10:25.005) Jan 30 00:10:25 crc kubenswrapper[5103]: Trace[750041310]: [10.001909466s] [10.001909466s] END Jan 30 00:10:25 crc kubenswrapper[5103]: E0130 00:10:25.005709 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:10:25 crc kubenswrapper[5103]: E0130 00:10:25.912303 5103 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.188f59b4983a7ea3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.762849955 +0000 UTC m=+0.634347997,LastTimestamp:2026-01-30 00:10:10.762849955 +0000 UTC m=+0.634347997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:26 crc kubenswrapper[5103]: I0130 00:10:26.070653 5103 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:10:26 crc kubenswrapper[5103]: I0130 00:10:26.070753 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 00:10:26 crc kubenswrapper[5103]: I0130 00:10:26.080984 5103 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:10:26 crc kubenswrapper[5103]: I0130 00:10:26.081129 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 00:10:26 crc kubenswrapper[5103]: I0130 00:10:26.623432 5103 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]log ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]etcd ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 
00:10:26 crc kubenswrapper[5103]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/priority-and-fairness-filter ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/start-apiextensions-informers ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/start-apiextensions-controllers ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/crd-informer-synced ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/start-system-namespaces-controller ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 30 00:10:26 crc kubenswrapper[5103]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 30 00:10:26 crc kubenswrapper[5103]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/bootstrap-controller ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/start-kube-aggregator-informers ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/apiservice-registration-controller ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/apiservice-discovery-controller ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]autoregister-completion ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/apiservice-openapi-controller ok Jan 30 00:10:26 crc kubenswrapper[5103]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 30 00:10:26 crc kubenswrapper[5103]: livez check failed Jan 30 00:10:26 crc kubenswrapper[5103]: I0130 00:10:26.623652 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:10:26 crc kubenswrapper[5103]: E0130 00:10:26.979020 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 30 00:10:28 crc kubenswrapper[5103]: I0130 00:10:28.983604 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 
00:10:28 crc kubenswrapper[5103]: I0130 00:10:28.984095 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:28 crc kubenswrapper[5103]: I0130 00:10:28.985468 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:28 crc kubenswrapper[5103]: I0130 00:10:28.985515 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:28 crc kubenswrapper[5103]: I0130 00:10:28.985535 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:28 crc kubenswrapper[5103]: E0130 00:10:28.986307 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:29 crc kubenswrapper[5103]: I0130 00:10:29.004170 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 00:10:29 crc kubenswrapper[5103]: I0130 00:10:29.004455 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:29 crc kubenswrapper[5103]: I0130 00:10:29.005281 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:29 crc kubenswrapper[5103]: I0130 00:10:29.005363 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:29 crc kubenswrapper[5103]: I0130 00:10:29.005390 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:29 crc kubenswrapper[5103]: E0130 00:10:29.006278 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:29 crc kubenswrapper[5103]: E0130 00:10:29.274837 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:10:30 crc kubenswrapper[5103]: E0130 00:10:30.939756 5103 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.079045 5103 trace.go:236] Trace[2027001053]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:10:19.761) (total time: 11317ms): Jan 30 00:10:31 crc kubenswrapper[5103]: Trace[2027001053]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 11317ms (00:10:31.078) Jan 30 00:10:31 crc kubenswrapper[5103]: Trace[2027001053]: [11.31770143s] [11.31770143s] END Jan 30 00:10:31 crc kubenswrapper[5103]: E0130 00:10:31.079585 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.079769 5103 csi_plugin.go:988] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.079967 5103 trace.go:236] Trace[421258559]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:10:18.023) (total time: 13056ms): Jan 30 00:10:31 crc kubenswrapper[5103]: Trace[421258559]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 13056ms (00:10:31.079) Jan 30 00:10:31 crc kubenswrapper[5103]: Trace[421258559]: [13.056384234s] [13.056384234s] END Jan 30 00:10:31 crc kubenswrapper[5103]: E0130 00:10:31.080037 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.080380 5103 trace.go:236] Trace[1562526466]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 00:10:20.681) (total time: 10399ms): Jan 30 00:10:31 crc kubenswrapper[5103]: Trace[1562526466]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 10399ms (00:10:31.080) Jan 30 00:10:31 crc kubenswrapper[5103]: Trace[1562526466]: [10.399182384s] [10.399182384s] END Jan 30 00:10:31 crc kubenswrapper[5103]: E0130 00:10:31.080429 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:10:31 crc kubenswrapper[5103]: E0130 00:10:31.093819 5103 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.106437 5103 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.151940 5103 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42756->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.152034 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42756->192.168.126.11:17697: read: connection reset by peer" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.152107 5103 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42772->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.152202 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42772->192.168.126.11:17697: read: connection reset by peer" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.464319 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.464549 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.465480 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.465603 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.465671 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:31 crc kubenswrapper[5103]: E0130 00:10:31.466123 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.469213 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.471968 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.613458 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.613993 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.614416 5103 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.614467 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.615136 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.615204 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.615223 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:31 crc kubenswrapper[5103]: E0130 00:10:31.615821 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.618875 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:31 crc kubenswrapper[5103]: I0130 00:10:31.747882 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.014141 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.017165 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="65363b989b574f53d3f93658c788e813617b988cbde0215f87ecc7dfd9d34caa" exitCode=255 Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.017383 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"65363b989b574f53d3f93658c788e813617b988cbde0215f87ecc7dfd9d34caa"} Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.017436 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.017535 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.018361 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.018402 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.018414 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.018711 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:32 crc kubenswrapper[5103]: E0130 00:10:32.018766 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.018779 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.018800 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:32 crc kubenswrapper[5103]: E0130 00:10:32.019367 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" 
Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.019924 5103 scope.go:117] "RemoveContainer" containerID="65363b989b574f53d3f93658c788e813617b988cbde0215f87ecc7dfd9d34caa" Jan 30 00:10:32 crc kubenswrapper[5103]: I0130 00:10:32.752970 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.021251 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.023157 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b"} Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.023292 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.023483 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.023939 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.023991 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.024011 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.024305 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.024395 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.024428 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:33 crc kubenswrapper[5103]: E0130 00:10:33.024587 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:33 crc kubenswrapper[5103]: E0130 00:10:33.025123 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:33 crc kubenswrapper[5103]: E0130 00:10:33.384540 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:10:33 crc kubenswrapper[5103]: I0130 00:10:33.747450 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.028980 5103 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.029746 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.032029 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b" exitCode=255 Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.032145 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b"} Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.032303 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.032318 5103 scope.go:117] "RemoveContainer" containerID="65363b989b574f53d3f93658c788e813617b988cbde0215f87ecc7dfd9d34caa" Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.033701 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.033742 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.033781 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:34 crc kubenswrapper[5103]: E0130 00:10:34.034157 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.034474 5103 scope.go:117] "RemoveContainer" containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b" Jan 30 00:10:34 crc kubenswrapper[5103]: E0130 00:10:34.034659 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:10:34 crc kubenswrapper[5103]: I0130 00:10:34.747267 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:35 crc kubenswrapper[5103]: I0130 00:10:35.038453 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 30 00:10:35 crc kubenswrapper[5103]: I0130 00:10:35.041264 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:35 crc kubenswrapper[5103]: I0130 00:10:35.042448 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:35 
crc kubenswrapper[5103]: I0130 00:10:35.042494 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:35 crc kubenswrapper[5103]: I0130 00:10:35.042511 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.043003 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:35 crc kubenswrapper[5103]: I0130 00:10:35.043369 5103 scope.go:117] "RemoveContainer" containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.043596 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:10:35 crc kubenswrapper[5103]: I0130 00:10:35.749090 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.920149 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b4983a7ea3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.762849955 +0000 UTC m=+0.634347997,LastTimestamp:2026-01-30 00:10:10.762849955 +0000 UTC m=+0.634347997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.927164 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.934357 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.942464 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.950420 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b4a294443d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.936505405 +0000 UTC m=+0.808003457,LastTimestamp:2026-01-30 00:10:10.936505405 +0000 UTC m=+0.808003457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.956434 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.970181731 +0000 UTC m=+0.841679823,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.963558 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.970219492 +0000 UTC m=+0.841717584,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.971504 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2a035b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.970238933 +0000 UTC m=+0.841737025,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.978506 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.972207241 +0000 UTC m=+0.843705333,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.985538 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.972222282 +0000 UTC m=+0.843720364,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.992691 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.972243102 +0000 UTC m=+0.843741184,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:35 crc kubenswrapper[5103]: E0130 00:10:35.997972 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.972258553 +0000 UTC m=+0.843756645,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.009609 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2a035b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.972263333 +0000 UTC m=+0.843761415,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.011261 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2a035b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.972278423 +0000 UTC m=+0.843776515,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 
00:10:36.017141 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.974378705 +0000 UTC m=+0.845876797,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.022781 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.974400906 +0000 UTC m=+0.845898998,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.029750 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2a035b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.974418466 +0000 UTC m=+0.845916558,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.037094 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.97457215 +0000 UTC m=+0.846070242,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.044705 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.974596371 +0000 UTC m=+0.846094463,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.051029 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2a035b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.974613531 +0000 UTC m=+0.846111613,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.056323 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.976803996 +0000 UTC m=+0.848302068,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.062655 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC 
m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.976829846 +0000 UTC m=+0.848327908,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.068734 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2a035b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2a035b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862433115 +0000 UTC m=+0.733931217,LastTimestamp:2026-01-30 00:10:10.976846157 +0000 UTC m=+0.848344229,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.073858 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e2919a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e2919a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862373283 +0000 UTC m=+0.733871365,LastTimestamp:2026-01-30 00:10:10.977113343 +0000 UTC m=+0.848611435,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.079215 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188f59b49e299ffa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188f59b49e299ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:10.862407674 +0000 UTC m=+0.733905746,LastTimestamp:2026-01-30 00:10:10.977177895 +0000 UTC m=+0.848675987,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.089406 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b4bd7928ec openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:11.387713772 +0000 UTC m=+1.259211844,LastTimestamp:2026-01-30 00:10:11.387713772 +0000 UTC m=+1.259211844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.094941 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b4be8972e9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:11.405558505 +0000 UTC m=+1.277056577,LastTimestamp:2026-01-30 00:10:11.405558505 +0000 UTC m=+1.277056577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.100193 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b4bee071d0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:11.411259856 +0000 UTC m=+1.282757928,LastTimestamp:2026-01-30 00:10:11.411259856 +0000 UTC m=+1.282757928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.106851 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b4bf954ddf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:11.423112671 +0000 UTC m=+1.294610733,LastTimestamp:2026-01-30 00:10:11.423112671 +0000 UTC m=+1.294610733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.114860 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4bf9b8029 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:11.423518761 +0000 UTC m=+1.295016843,LastTimestamp:2026-01-30 00:10:11.423518761 +0000 UTC m=+1.295016843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.121339 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4e8373f24 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.104814372 +0000 UTC m=+1.976312424,LastTimestamp:2026-01-30 00:10:12.104814372 +0000 UTC m=+1.976312424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.127093 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b4e838fa32 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.104927794 +0000 UTC m=+1.976425856,LastTimestamp:2026-01-30 00:10:12.104927794 +0000 UTC m=+1.976425856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.132969 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b4e83976bb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.104959675 +0000 UTC m=+1.976457727,LastTimestamp:2026-01-30 00:10:12.104959675 +0000 UTC m=+1.976457727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.138832 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b4e843a7c8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.105627592 +0000 UTC m=+1.977125644,LastTimestamp:2026-01-30 00:10:12.105627592 +0000 UTC m=+1.977125644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.148531 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b4e8a2fad9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.111874777 +0000 UTC m=+1.983372829,LastTimestamp:2026-01-30 00:10:12.111874777 +0000 UTC m=+1.983372829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.154780 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4e9083142 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.118507842 +0000 UTC m=+1.990005894,LastTimestamp:2026-01-30 00:10:12.118507842 +0000 UTC m=+1.990005894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.162086 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4e91b62e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.119765733 +0000 UTC m=+1.991263785,LastTimestamp:2026-01-30 00:10:12.119765733 +0000 UTC m=+1.991263785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.168333 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b4e921b73f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.120180543 +0000 UTC m=+1.991678595,LastTimestamp:2026-01-30 00:10:12.120180543 +0000 UTC m=+1.991678595,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.175438 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b4e9260c2e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.12046443 +0000 UTC m=+1.991962492,LastTimestamp:2026-01-30 00:10:12.12046443 +0000 UTC m=+1.991962492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.182134 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b4e92d34a6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.120933542 +0000 UTC m=+1.992431604,LastTimestamp:2026-01-30 00:10:12.120933542 +0000 UTC m=+1.992431604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.189491 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b4e9b6f3b6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.129960886 +0000 UTC m=+2.001458938,LastTimestamp:2026-01-30 00:10:12.129960886 +0000 UTC m=+2.001458938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.195446 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4fc500300 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.441981696 +0000 UTC m=+2.313479788,LastTimestamp:2026-01-30 00:10:12.441981696 +0000 UTC m=+2.313479788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.203245 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4fd49a5dc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.458341852 +0000 UTC m=+2.329839944,LastTimestamp:2026-01-30 00:10:12.458341852 +0000 UTC m=+2.329839944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.208713 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b4fd645444 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.460090436 +0000 UTC m=+2.331588528,LastTimestamp:2026-01-30 00:10:12.460090436 +0000 UTC m=+2.331588528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.215024 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b51702e6e4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.88991306 +0000 UTC m=+2.761411142,LastTimestamp:2026-01-30 00:10:12.88991306 +0000 UTC m=+2.761411142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.220524 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b517328668 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.893034088 +0000 UTC m=+2.764532140,LastTimestamp:2026-01-30 00:10:12.893034088 +0000 UTC m=+2.764532140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.222962 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b51759fd82 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.895620482 +0000 UTC m=+2.767118534,LastTimestamp:2026-01-30 00:10:12.895620482 +0000 UTC m=+2.767118534,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.230256 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b517821137 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:12.898246967 +0000 UTC m=+2.769745029,LastTimestamp:2026-01-30 00:10:12.898246967 +0000 UTC m=+2.769745029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.236286 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b52de806dd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.274027741 +0000 UTC m=+3.145525803,LastTimestamp:2026-01-30 00:10:13.274027741 +0000 UTC m=+3.145525803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.243387 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b52df8ab52 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.275118418 +0000 UTC m=+3.146616480,LastTimestamp:2026-01-30 00:10:13.275118418 +0000 UTC m=+3.146616480,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.248451 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b52df9d6be openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.27519507 +0000 UTC m=+3.146693132,LastTimestamp:2026-01-30 00:10:13.27519507 +0000 UTC m=+3.146693132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.256118 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b52e02f091 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.275791505 +0000 UTC m=+3.147289567,LastTimestamp:2026-01-30 00:10:13.275791505 +0000 UTC m=+3.147289567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.263182 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b52e076a92 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.276084882 +0000 UTC m=+3.147582944,LastTimestamp:2026-01-30 00:10:13.276084882 +0000 UTC m=+3.147582944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.270436 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b52f95b8d9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.302188249 +0000 UTC m=+3.173686311,LastTimestamp:2026-01-30 00:10:13.302188249 +0000 UTC m=+3.173686311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.277336 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b52fa69c33 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.303295027 +0000 UTC m=+3.174793089,LastTimestamp:2026-01-30 00:10:13.303295027 +0000 UTC m=+3.174793089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.282139 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b52fd44051 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.306286161 +0000 UTC m=+3.177784223,LastTimestamp:2026-01-30 00:10:13.306286161 +0000 UTC m=+3.177784223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.289754 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b52fe11975 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.307128181 +0000 UTC m=+3.178626243,LastTimestamp:2026-01-30 00:10:13.307128181 +0000 UTC m=+3.178626243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.297165 5103 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b52fe93695 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.307659925 +0000 UTC m=+3.179157977,LastTimestamp:2026-01-30 00:10:13.307659925 +0000 UTC m=+3.179157977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.304032 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188f59b52fed3d9f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.307923871 +0000 UTC m=+3.179421933,LastTimestamp:2026-01-30 00:10:13.307923871 +0000 UTC m=+3.179421933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.310522 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b52ff86853 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.308655699 +0000 UTC m=+3.180153761,LastTimestamp:2026-01-30 00:10:13.308655699 +0000 UTC m=+3.180153761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.317806 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5315c1f35 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.331967797 +0000 UTC m=+3.203465859,LastTimestamp:2026-01-30 00:10:13.331967797 +0000 UTC m=+3.203465859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.325312 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b53e7847ef openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.551917039 +0000 UTC m=+3.423415091,LastTimestamp:2026-01-30 00:10:13.551917039 +0000 UTC m=+3.423415091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.329658 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b53e789174 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.55193586 +0000 UTC m=+3.423433912,LastTimestamp:2026-01-30 00:10:13.55193586 +0000 UTC m=+3.423433912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.333216 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b53f79d049 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.568794697 +0000 UTC m=+3.440292759,LastTimestamp:2026-01-30 00:10:13.568794697 +0000 UTC m=+3.440292759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.338007 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b53f9d3415 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.571114005 +0000 UTC m=+3.442612057,LastTimestamp:2026-01-30 00:10:13.571114005 +0000 UTC m=+3.442612057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.341156 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b53fa1b992 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.571410322 +0000 UTC m=+3.442908384,LastTimestamp:2026-01-30 00:10:13.571410322 +0000 UTC m=+3.442908384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.345520 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b53fb011f7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.572350455 +0000 UTC m=+3.443848507,LastTimestamp:2026-01-30 00:10:13.572350455 +0000 UTC m=+3.443848507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.351837 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b54134fa35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.597837877 +0000 UTC m=+3.469335929,LastTimestamp:2026-01-30 00:10:13.597837877 +0000 UTC m=+3.469335929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.357196 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b5414848a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.599103139 +0000 UTC m=+3.470601191,LastTimestamp:2026-01-30 00:10:13.599103139 +0000 UTC m=+3.470601191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.363729 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b54d724d25 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.803183397 +0000 UTC 
m=+3.674681449,LastTimestamp:2026-01-30 00:10:13.803183397 +0000 UTC m=+3.674681449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.370641 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b54d93a4d3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.805368531 +0000 UTC m=+3.676866583,LastTimestamp:2026-01-30 00:10:13.805368531 +0000 UTC m=+3.676866583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.378679 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188f59b54e6fd1f9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.819798009 +0000 UTC m=+3.691296061,LastTimestamp:2026-01-30 00:10:13.819798009 +0000 UTC m=+3.691296061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.386421 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b54eaa18db openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.823617243 +0000 UTC m=+3.695115285,LastTimestamp:2026-01-30 00:10:13.823617243 +0000 UTC m=+3.695115285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.393896 5103 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b54ec37a7b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.825280635 +0000 UTC m=+3.696778687,LastTimestamp:2026-01-30 00:10:13.825280635 +0000 UTC m=+3.696778687,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.403077 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b554529798 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:13.918545816 +0000 UTC m=+3.790043868,LastTimestamp:2026-01-30 00:10:13.918545816 +0000 UTC m=+3.790043868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.410582 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b55c060bb7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.047746999 +0000 UTC m=+3.919245051,LastTimestamp:2026-01-30 00:10:14.047746999 +0000 UTC m=+3.919245051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.419842 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b55d4fdeb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.069362355 +0000 UTC m=+3.940860407,LastTimestamp:2026-01-30 00:10:14.069362355 +0000 UTC m=+3.940860407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.425754 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b55d68c6d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.070994645 +0000 UTC m=+3.942492697,LastTimestamp:2026-01-30 00:10:14.070994645 +0000 UTC m=+3.942492697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.432175 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b56219f89e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.149716126 +0000 UTC m=+4.021214178,LastTimestamp:2026-01-30 00:10:14.149716126 +0000 UTC m=+4.021214178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.433519 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5636c0001 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container 
etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.171869185 +0000 UTC m=+4.043367227,LastTimestamp:2026-01-30 00:10:14.171869185 +0000 UTC m=+4.043367227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.439969 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b56c34dc06 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.319250438 +0000 UTC m=+4.190748490,LastTimestamp:2026-01-30 00:10:14.319250438 +0000 UTC m=+4.190748490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.447381 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b56d089716 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.333126422 +0000 UTC m=+4.204624474,LastTimestamp:2026-01-30 00:10:14.333126422 +0000 UTC m=+4.204624474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.455578 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b59120b03e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.938685502 +0000 UTC m=+4.810183584,LastTimestamp:2026-01-30 00:10:14.938685502 +0000 UTC m=+4.810183584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc 
kubenswrapper[5103]: E0130 00:10:36.461352 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5a2b16a4a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.233382986 +0000 UTC m=+5.104881038,LastTimestamp:2026-01-30 00:10:15.233382986 +0000 UTC m=+5.104881038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.468348 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5a3943013 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.248244755 +0000 UTC m=+5.119742797,LastTimestamp:2026-01-30 00:10:15.248244755 +0000 UTC m=+5.119742797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.474752 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5a3a789e8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.249512936 +0000 UTC m=+5.121010988,LastTimestamp:2026-01-30 00:10:15.249512936 +0000 UTC m=+5.121010988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.481909 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5b40834d6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.524283606 +0000 UTC m=+5.395781698,LastTimestamp:2026-01-30 00:10:15.524283606 +0000 UTC m=+5.395781698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.489295 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5b50512e6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.540855526 +0000 UTC m=+5.412353618,LastTimestamp:2026-01-30 00:10:15.540855526 +0000 UTC m=+5.412353618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.495353 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5b51af0da openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.542288602 +0000 UTC m=+5.413786694,LastTimestamp:2026-01-30 00:10:15.542288602 +0000 UTC m=+5.413786694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.504873 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5c4c5c5eb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.805142507 +0000 UTC m=+5.676640609,LastTimestamp:2026-01-30 00:10:15.805142507 +0000 UTC m=+5.676640609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.512286 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5c5bd4e67 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.821364839 +0000 UTC m=+5.692862941,LastTimestamp:2026-01-30 00:10:15.821364839 +0000 UTC m=+5.692862941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.519509 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5c5d59099 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:15.822954649 +0000 UTC m=+5.694452741,LastTimestamp:2026-01-30 00:10:15.822954649 +0000 UTC m=+5.694452741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.527295 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5d5f95ff6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:16.09373695 +0000 UTC m=+5.965235022,LastTimestamp:2026-01-30 00:10:16.09373695 +0000 UTC m=+5.965235022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.538329 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5d6d2d14a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:16.107987274 +0000 UTC m=+5.979485336,LastTimestamp:2026-01-30 00:10:16.107987274 +0000 UTC m=+5.979485336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.545341 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5d6ee5e6e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:16.109792878 +0000 UTC m=+5.981290940,LastTimestamp:2026-01-30 00:10:16.109792878 +0000 UTC m=+5.981290940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.552793 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5e4b4ce79 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:16.340901497 +0000 UTC m=+6.212399549,LastTimestamp:2026-01-30 00:10:16.340901497 +0000 UTC m=+6.212399549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.560014 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188f59b5e5c29605 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:16.358581765 +0000 UTC m=+6.230079807,LastTimestamp:2026-01-30 00:10:16.358581765 +0000 UTC m=+6.230079807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.570339 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-controller-manager-crc.188f59b7c89b1d7b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 00:10:36 crc kubenswrapper[5103]: body: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.459390331 +0000 UTC m=+14.330888423,LastTimestamp:2026-01-30 00:10:24.459390331 +0000 UTC m=+14.330888423,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.578029 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188f59b7c89d04d6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:24.459515094 +0000 UTC m=+14.331013176,LastTimestamp:2026-01-30 00:10:24.459515094 +0000 UTC m=+14.331013176,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.585447 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-apiserver-crc.188f59b828a5f55a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 30 00:10:36 crc kubenswrapper[5103]: body: 
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:10:36 crc kubenswrapper[5103]: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.07071369 +0000 UTC m=+15.942211782,LastTimestamp:2026-01-30 00:10:26.07071369 +0000 UTC m=+15.942211782,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.592745 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b828a70725 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.070783781 +0000 UTC m=+15.942281873,LastTimestamp:2026-01-30 00:10:26.070783781 +0000 UTC m=+15.942281873,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.599981 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b828a5f55a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-apiserver-crc.188f59b828a5f55a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 30 00:10:36 crc kubenswrapper[5103]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 00:10:36 crc kubenswrapper[5103]: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.07071369 +0000 UTC m=+15.942211782,LastTimestamp:2026-01-30 00:10:26.081045266 +0000 UTC m=+15.952543358,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.608208 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b828a70725\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b828a70725 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.070783781 +0000 UTC m=+15.942281873,LastTimestamp:2026-01-30 00:10:26.081164359 +0000 UTC m=+15.952662451,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.617842 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-apiserver-crc.188f59b84999afeb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Jan 30 00:10:36 crc kubenswrapper[5103]: body: [+]ping ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]log ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]etcd ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/priority-and-fairness-filter ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-apiextensions-informers ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-apiextensions-controllers ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/crd-informer-synced ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-system-namespaces-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 
30 00:10:36 crc kubenswrapper[5103]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 30 00:10:36 crc kubenswrapper[5103]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/bootstrap-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/start-kube-aggregator-informers ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-registration-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-discovery-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]autoregister-completion ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-openapi-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 30 00:10:36 crc kubenswrapper[5103]: livez check failed Jan 30 00:10:36 crc kubenswrapper[5103]: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.623557611 +0000 UTC m=+16.495055713,LastTimestamp:2026-01-30 00:10:26.623557611 +0000 UTC m=+16.495055713,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.626480 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b8499d0e67 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:26.623778407 +0000 UTC m=+16.495276499,LastTimestamp:2026-01-30 00:10:26.623778407 +0000 UTC m=+16.495276499,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.634756 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-apiserver-crc.188f59b957844c12 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:42756->192.168.126.11:17697: read: connection reset by peer Jan 30 00:10:36 crc kubenswrapper[5103]: body: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:31.152004114 +0000 UTC m=+21.023502166,LastTimestamp:2026-01-30 00:10:31.152004114 +0000 UTC m=+21.023502166,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.645321 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b957856fd6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42756->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:31.152078806 +0000 UTC m=+21.023576858,LastTimestamp:2026-01-30 00:10:31.152078806 +0000 UTC m=+21.023576858,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.653531 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-apiserver-crc.188f59b95786e976 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:42772->192.168.126.11:17697: read: connection reset by peer Jan 30 00:10:36 crc kubenswrapper[5103]: body: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:31.152175478 +0000 UTC m=+21.023673540,LastTimestamp:2026-01-30 00:10:31.152175478 +0000 UTC m=+21.023673540,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.661409 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b95788079a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42772->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:31.15224873 +0000 UTC m=+21.023746792,LastTimestamp:2026-01-30 00:10:31.15224873 +0000 UTC m=+21.023746792,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.668666 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 30 00:10:36 crc kubenswrapper[5103]: &Event{ObjectMeta:{kube-apiserver-crc.188f59b97314abd6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 30 00:10:36 crc kubenswrapper[5103]: body: Jan 30 00:10:36 crc kubenswrapper[5103]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:31.614450646 +0000 UTC m=+21.485948708,LastTimestamp:2026-01-30 00:10:31.614450646 +0000 UTC m=+21.485948708,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 00:10:36 crc kubenswrapper[5103]: > Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.675954 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b973154c31 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:31.614491697 +0000 UTC m=+21.485989749,LastTimestamp:2026-01-30 00:10:31.614491697 +0000 UTC m=+21.485989749,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.684470 5103 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.188f59b55d68c6d5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b55d68c6d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.070994645 +0000 UTC m=+3.942492697,LastTimestamp:2026-01-30 00:10:32.021964627 +0000 UTC m=+21.893462679,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.692450 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b56c34dc06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b56c34dc06 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.319250438 +0000 UTC m=+4.190748490,LastTimestamp:2026-01-30 00:10:32.244499073 +0000 UTC m=+22.115997145,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.698713 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b56d089716\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b56d089716 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.333126422 +0000 UTC m=+4.204624474,LastTimestamp:2026-01-30 00:10:32.256814808 +0000 UTC m=+22.128312850,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.706522 5103 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: E0130 00:10:36.714255 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59ba0355b34a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:10:35.04356059 +0000 UTC m=+24.915058652,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:36 crc kubenswrapper[5103]: I0130 00:10:36.747874 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.494941 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.496481 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.496562 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.496582 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.496628 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:37 crc kubenswrapper[5103]: E0130 00:10:37.507816 5103 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 
00:10:37.749086 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.836436 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.836776 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.838178 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.838276 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.838290 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:37 crc kubenswrapper[5103]: E0130 00:10:37.838857 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:37 crc kubenswrapper[5103]: I0130 00:10:37.839271 5103 scope.go:117] "RemoveContainer" containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b" Jan 30 00:10:37 crc kubenswrapper[5103]: E0130 00:10:37.839512 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:10:37 crc kubenswrapper[5103]: E0130 00:10:37.847767 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59ba0355b34a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:10:37.839474018 +0000 UTC m=+27.710972080,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:38 crc kubenswrapper[5103]: E0130 00:10:38.016710 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.RuntimeClass" Jan 30 00:10:38 crc kubenswrapper[5103]: I0130 00:10:38.748509 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:39 crc kubenswrapper[5103]: I0130 00:10:39.747596 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:40 crc kubenswrapper[5103]: E0130 00:10:40.068874 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:10:40 crc kubenswrapper[5103]: E0130 00:10:40.329643 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:10:40 crc kubenswrapper[5103]: E0130 00:10:40.393632 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:10:40 crc kubenswrapper[5103]: I0130 00:10:40.748573 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:40 crc kubenswrapper[5103]: E0130 00:10:40.940316 5103 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:10:41 crc kubenswrapper[5103]: I0130 00:10:41.749322 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:42 crc kubenswrapper[5103]: E0130 00:10:42.275866 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:10:42 crc kubenswrapper[5103]: I0130 00:10:42.748404 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:43 crc kubenswrapper[5103]: I0130 00:10:43.023988 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:10:43 crc kubenswrapper[5103]: I0130 00:10:43.024669 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:43 crc 
kubenswrapper[5103]: I0130 00:10:43.026242 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:43 crc kubenswrapper[5103]: I0130 00:10:43.026338 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:43 crc kubenswrapper[5103]: I0130 00:10:43.026358 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:43 crc kubenswrapper[5103]: E0130 00:10:43.027117 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:43 crc kubenswrapper[5103]: I0130 00:10:43.027640 5103 scope.go:117] "RemoveContainer" containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b" Jan 30 00:10:43 crc kubenswrapper[5103]: E0130 00:10:43.028041 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:10:43 crc kubenswrapper[5103]: E0130 00:10:43.036465 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59ba0355b34a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:10:43.027976599 +0000 UTC m=+32.899474691,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:43 crc kubenswrapper[5103]: I0130 00:10:43.748581 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:44 crc kubenswrapper[5103]: I0130 00:10:44.508616 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:44 crc kubenswrapper[5103]: I0130 00:10:44.510321 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:44 crc kubenswrapper[5103]: I0130 00:10:44.510387 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:44 crc kubenswrapper[5103]: I0130 00:10:44.510407 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:44 crc 
kubenswrapper[5103]: I0130 00:10:44.510443 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:44 crc kubenswrapper[5103]: E0130 00:10:44.528772 5103 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:10:44 crc kubenswrapper[5103]: I0130 00:10:44.748395 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:45 crc kubenswrapper[5103]: I0130 00:10:45.748919 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:46 crc kubenswrapper[5103]: I0130 00:10:46.748545 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:47 crc kubenswrapper[5103]: E0130 00:10:47.401999 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:10:47 crc kubenswrapper[5103]: I0130 00:10:47.748898 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:48 crc kubenswrapper[5103]: I0130 00:10:48.748298 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:49 crc kubenswrapper[5103]: I0130 00:10:49.748885 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:50 crc kubenswrapper[5103]: I0130 00:10:50.749529 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:50 crc kubenswrapper[5103]: E0130 00:10:50.941362 5103 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:10:51 crc kubenswrapper[5103]: I0130 00:10:51.529102 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:51 crc kubenswrapper[5103]: I0130 00:10:51.530416 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:51 crc kubenswrapper[5103]: I0130 00:10:51.530464 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 00:10:51 crc kubenswrapper[5103]: I0130 00:10:51.530483 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:51 crc kubenswrapper[5103]: I0130 00:10:51.530521 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:51 crc kubenswrapper[5103]: E0130 00:10:51.544371 5103 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:10:51 crc kubenswrapper[5103]: I0130 00:10:51.747764 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:52 crc kubenswrapper[5103]: I0130 00:10:52.749394 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:53 crc kubenswrapper[5103]: I0130 00:10:53.748808 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:54 crc kubenswrapper[5103]: E0130 00:10:54.410912 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:10:54 crc kubenswrapper[5103]: I0130 00:10:54.747882 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:55 crc kubenswrapper[5103]: I0130 00:10:55.747619 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:55 crc kubenswrapper[5103]: I0130 00:10:55.867758 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:55 crc kubenswrapper[5103]: I0130 00:10:55.869185 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:55 crc kubenswrapper[5103]: I0130 00:10:55.869264 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:55 crc kubenswrapper[5103]: I0130 00:10:55.869284 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:55 crc kubenswrapper[5103]: E0130 00:10:55.869895 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:55 crc kubenswrapper[5103]: I0130 00:10:55.870438 5103 scope.go:117] "RemoveContainer" 
containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b" Jan 30 00:10:55 crc kubenswrapper[5103]: E0130 00:10:55.880399 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b55d68c6d5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b55d68c6d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.070994645 +0000 UTC m=+3.942492697,LastTimestamp:2026-01-30 00:10:55.872248784 +0000 UTC m=+45.743746876,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:56 crc kubenswrapper[5103]: E0130 00:10:56.097144 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b56c34dc06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b56c34dc06 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.319250438 +0000 UTC m=+4.190748490,LastTimestamp:2026-01-30 00:10:56.091537438 +0000 UTC m=+45.963035520,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:56 crc kubenswrapper[5103]: I0130 00:10:56.105615 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 30 00:10:56 crc kubenswrapper[5103]: I0130 00:10:56.108511 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915"} Jan 30 00:10:56 crc kubenswrapper[5103]: E0130 00:10:56.109313 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59b56d089716\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59b56d089716 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:14.333126422 +0000 UTC m=+4.204624474,LastTimestamp:2026-01-30 00:10:56.104385141 +0000 UTC m=+45.975883193,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:56 crc kubenswrapper[5103]: I0130 00:10:56.748821 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:56 crc kubenswrapper[5103]: E0130 00:10:56.759995 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 30 00:10:57 crc kubenswrapper[5103]: E0130 00:10:57.083970 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 30 00:10:57 crc kubenswrapper[5103]: I0130 00:10:57.110339 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:57 crc kubenswrapper[5103]: I0130 00:10:57.110999 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:57 crc kubenswrapper[5103]: I0130 00:10:57.111076 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:57 crc kubenswrapper[5103]: I0130 00:10:57.111090 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:57 crc kubenswrapper[5103]: E0130 00:10:57.111518 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:57 crc kubenswrapper[5103]: I0130 00:10:57.748650 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.115480 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.116746 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.119587 5103 generic.go:358] "Generic (PLEG): container finished" 
podID="3a14caf222afb62aaabdc47808b6f944" containerID="4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915" exitCode=255 Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.119659 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915"} Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.119720 5103 scope.go:117] "RemoveContainer" containerID="ac3cc1169a7978ed9a0cf215e3466e2009a1fd38d153db5741f647ea5effd14b" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.120010 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.120850 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.120916 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.120941 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:58 crc kubenswrapper[5103]: E0130 00:10:58.121651 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.122514 5103 scope.go:117] "RemoveContainer" containerID="4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915" Jan 30 00:10:58 crc kubenswrapper[5103]: E0130 00:10:58.122935 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:10:58 crc kubenswrapper[5103]: E0130 00:10:58.131467 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59ba0355b34a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:10:58.12286286 +0000 UTC m=+47.994360952,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.544703 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:10:58 crc kubenswrapper[5103]: 
I0130 00:10:58.546014 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.546261 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.546453 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.546602 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:10:58 crc kubenswrapper[5103]: E0130 00:10:58.561924 5103 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:10:58 crc kubenswrapper[5103]: I0130 00:10:58.749181 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:10:58 crc kubenswrapper[5103]: E0130 00:10:58.884129 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 30 00:10:59 crc kubenswrapper[5103]: I0130 00:10:59.126243 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 30 00:10:59 crc kubenswrapper[5103]: I0130 00:10:59.747723 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:00 crc kubenswrapper[5103]: I0130 00:11:00.748201 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:00 crc kubenswrapper[5103]: E0130 00:11:00.942188 5103 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:11:01 crc kubenswrapper[5103]: E0130 00:11:01.419500 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:11:01 crc kubenswrapper[5103]: I0130 00:11:01.748279 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:02 crc kubenswrapper[5103]: I0130 00:11:02.750880 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope 
Jan 30 00:11:03 crc kubenswrapper[5103]: E0130 00:11:03.533442 5103 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 30 00:11:03 crc kubenswrapper[5103]: I0130 00:11:03.747883 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:04 crc kubenswrapper[5103]: I0130 00:11:04.747016 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.562358 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.565004 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.565081 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.565097 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.565134 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:11:05 crc kubenswrapper[5103]: E0130 00:11:05.579771 5103 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.747130 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.951210 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.951536 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.952449 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.952504 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:05 crc kubenswrapper[5103]: I0130 00:11:05.952515 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:05 crc kubenswrapper[5103]: E0130 00:11:05.952893 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:06 crc kubenswrapper[5103]: I0130 00:11:06.744810 5103 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.110769 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.111679 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.112871 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.112954 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.112966 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:07 crc kubenswrapper[5103]: E0130 00:11:07.113457 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.113754 5103 scope.go:117] "RemoveContainer" containerID="4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915" Jan 30 00:11:07 crc kubenswrapper[5103]: E0130 00:11:07.114040 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:07 crc kubenswrapper[5103]: E0130 00:11:07.121978 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59ba0355b34a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:11:07.114005336 +0000 UTC m=+56.985503388,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.747035 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.836030 5103 kubelet.go:2658] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.836463 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.837744 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.837810 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.837830 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:07 crc kubenswrapper[5103]: E0130 00:11:07.838657 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:07 crc kubenswrapper[5103]: I0130 00:11:07.839207 5103 scope.go:117] "RemoveContainer" containerID="4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915" Jan 30 00:11:07 crc kubenswrapper[5103]: E0130 00:11:07.839568 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:07 crc kubenswrapper[5103]: E0130 00:11:07.846706 5103 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188f59ba0355b34a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188f59ba0355b34a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:10:34.034631498 +0000 UTC m=+23.906129550,LastTimestamp:2026-01-30 00:11:07.839508578 +0000 UTC m=+57.711006660,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:11:08 crc kubenswrapper[5103]: E0130 00:11:08.428578 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:11:08 crc kubenswrapper[5103]: I0130 00:11:08.746340 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:09 crc kubenswrapper[5103]: I0130 00:11:09.749271 5103 csi_plugin.go:988] Failed to 
contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:10 crc kubenswrapper[5103]: I0130 00:11:10.748456 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:10 crc kubenswrapper[5103]: E0130 00:11:10.943416 5103 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:11:11 crc kubenswrapper[5103]: I0130 00:11:11.746982 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:12 crc kubenswrapper[5103]: I0130 00:11:12.580417 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:12 crc kubenswrapper[5103]: I0130 00:11:12.582770 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:12 crc kubenswrapper[5103]: I0130 00:11:12.582985 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:12 crc kubenswrapper[5103]: I0130 00:11:12.583209 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:12 crc kubenswrapper[5103]: I0130 00:11:12.583413 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:11:12 crc kubenswrapper[5103]: E0130 00:11:12.600373 5103 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 30 00:11:12 crc kubenswrapper[5103]: I0130 00:11:12.747140 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:13 crc kubenswrapper[5103]: I0130 00:11:13.745999 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:14 crc kubenswrapper[5103]: I0130 00:11:14.742760 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:15 crc kubenswrapper[5103]: E0130 00:11:15.433727 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 30 00:11:15 crc kubenswrapper[5103]: I0130 00:11:15.746441 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" 
is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:16 crc kubenswrapper[5103]: I0130 00:11:16.748319 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:17 crc kubenswrapper[5103]: I0130 00:11:17.745714 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:18 crc kubenswrapper[5103]: I0130 00:11:18.748720 5103 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 30 00:11:18 crc kubenswrapper[5103]: I0130 00:11:18.868229 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:18 crc kubenswrapper[5103]: I0130 00:11:18.870201 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:18 crc kubenswrapper[5103]: I0130 00:11:18.870248 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:18 crc kubenswrapper[5103]: I0130 00:11:18.870261 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:18 crc kubenswrapper[5103]: E0130 00:11:18.870540 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.049715 5103 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-dpv7j" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.057021 5103 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-dpv7j" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.142865 5103 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.570140 5103 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.600876 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.602266 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.602399 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.602487 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.602701 5103 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.614697 5103 kubelet_node_status.go:127] "Node 
was previously registered" node="crc" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.615018 5103 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.615036 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.618888 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.619009 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.619139 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.619273 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.619393 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:19Z","lastTransitionTime":"2026-01-30T00:11:19Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.635229 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"s
izeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.643148 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.643198 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 
00:11:19.643208 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.643227 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.643238 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:19Z","lastTransitionTime":"2026-01-30T00:11:19Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.653549 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"s
izeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.661983 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.662032 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 
00:11:19.662064 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.662086 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.662099 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:19Z","lastTransitionTime":"2026-01-30T00:11:19Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.672702 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"s
izeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.679681 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.679721 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 
00:11:19.679733 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.679753 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:19 crc kubenswrapper[5103]: I0130 00:11:19.679767 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:19Z","lastTransitionTime":"2026-01-30T00:11:19Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.691327 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:19Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"s
izeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.691519 5103 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.691545 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 
00:11:19.791962 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.892279 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:19 crc kubenswrapper[5103]: E0130 00:11:19.993427 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: I0130 00:11:20.058951 5103 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-01 00:06:19 +0000 UTC" deadline="2026-02-25 23:44:29.01549579 +0000 UTC" Jan 30 00:11:20 crc kubenswrapper[5103]: I0130 00:11:20.058999 5103 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="647h33m8.956500012s" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.093930 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.194214 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.295169 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.396269 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.497336 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.597722 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.698034 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.798589 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.899578 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:20 crc kubenswrapper[5103]: E0130 00:11:20.944452 5103 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.000284 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.100352 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.201081 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.301518 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.401661 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 
00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.502470 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.603481 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.704129 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.804839 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:21 crc kubenswrapper[5103]: E0130 00:11:21.905764 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.006280 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.107247 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.207502 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.307973 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.408378 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.508488 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.609594 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.710659 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.811383 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:22 crc kubenswrapper[5103]: I0130 00:11:22.868259 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:22 crc kubenswrapper[5103]: I0130 00:11:22.869444 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:22 crc kubenswrapper[5103]: I0130 00:11:22.869517 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:22 crc kubenswrapper[5103]: I0130 00:11:22.869540 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.870201 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:22 crc kubenswrapper[5103]: I0130 00:11:22.870703 5103 scope.go:117] "RemoveContainer" containerID="4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915" Jan 30 00:11:22 crc kubenswrapper[5103]: E0130 00:11:22.917019 5103 kubelet_node_status.go:515] "Error getting the 
current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.017981 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.118965 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.219223 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.320249 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.421449 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.522401 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.622910 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.723119 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.824258 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:23 crc kubenswrapper[5103]: E0130 00:11:23.924717 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.024880 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.125262 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: I0130 00:11:24.216354 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 30 00:11:24 crc kubenswrapper[5103]: I0130 00:11:24.217941 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b"} Jan 30 00:11:24 crc kubenswrapper[5103]: I0130 00:11:24.218157 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:24 crc kubenswrapper[5103]: I0130 00:11:24.218711 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:24 crc kubenswrapper[5103]: I0130 00:11:24.218744 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:24 crc kubenswrapper[5103]: I0130 00:11:24.218755 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.219144 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 
00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.225976 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.326509 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.426763 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.527941 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.628397 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.728904 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.829394 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:24 crc kubenswrapper[5103]: E0130 00:11:24.930136 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.030723 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.131759 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.222652 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.223258 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.225495 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" exitCode=255 Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.225555 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b"} Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.225598 5103 scope.go:117] "RemoveContainer" containerID="4b59d73e18557cc04bb68857bf849ae69450c2c36a52a726fcddb19b74dc7915" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.225787 5103 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.226504 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.226537 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.226550 5103 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.227030 5103 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 30 00:11:25 crc kubenswrapper[5103]: I0130 00:11:25.227293 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.227571 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.232209 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.332525 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.432887 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.533338 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.633477 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.734586 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.835252 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:25 crc kubenswrapper[5103]: E0130 00:11:25.936083 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.036829 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.137293 5103 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.180100 5103 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.229728 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.239318 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.239385 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.239402 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc 
kubenswrapper[5103]: I0130 00:11:26.239419 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.239432 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.273709 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.290910 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.341667 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.341718 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.341731 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.341753 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.341766 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.389710 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.444477 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.444559 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.444579 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.444607 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.444628 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.490248 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.546880 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.546929 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.546944 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.546981 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.546992 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.590332 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.656810 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.656851 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.656863 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.656879 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.656890 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.759296 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.759349 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.759387 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.759413 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.759463 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.765934 5103 apiserver.go:52] "Watching apiserver" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.773789 5103 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.774441 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-dns/node-resolver-bs8rz","openshift-multus/multus-additional-cni-plugins-6tmbq","openshift-multus/multus-swfns","openshift-multus/network-metrics-daemon-vsrcq","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6","openshift-image-registry/node-ca-226mj","openshift-machine-config-operator/machine-config-daemon-6g6hp","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-node-identity/network-node-identity-dgvkt","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-node-8lwjf","openshift-etcd/etcd-crc"] Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.775677 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.776300 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.776442 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.777770 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.777848 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.777982 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.779714 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.779810 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.780812 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.784100 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.784293 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.784758 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.785152 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.785357 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.785546 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.786907 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.795329 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.806135 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.826568 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.837038 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.845880 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.856903 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.862741 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.862796 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.862810 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.862829 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.862841 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.867331 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.898914 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899018 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.899173 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899227 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-hosts-file\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.899280 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:27.399259691 +0000 UTC m=+77.270757733 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899312 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899347 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-tmp-dir\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899382 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899403 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899434 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899495 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899522 5103 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899551 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899579 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899604 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899652 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899703 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899741 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89lmd\" (UniqueName: \"kubernetes.io/projected/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-kube-api-access-89lmd\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.899781 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.900327 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.900403 5103 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:27.400388238 +0000 UTC m=+77.271886290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.915020 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.915089 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.915105 5103 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.915246 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:27.415222128 +0000 UTC m=+77.286720180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.934671 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.935001 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.935123 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:26 crc kubenswrapper[5103]: E0130 00:11:26.935303 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. 
No retries permitted until 2026-01-30 00:11:27.435278365 +0000 UTC m=+77.306776427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.965908 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.966323 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.966446 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.966546 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.966644 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:26Z","lastTransitionTime":"2026-01-30T00:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.971981 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.972798 5103 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.974231 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.977366 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.980007 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.980117 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.980342 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.981242 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:26 crc kubenswrapper[5103]: I0130 00:11:26.981680 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.000268 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.000587 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-89lmd\" (UniqueName: 
\"kubernetes.io/projected/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-kube-api-access-89lmd\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.000706 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-hosts-file\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.000802 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-tmp-dir\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.000903 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.001198 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.001222 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-hosts-file\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.001232 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.001721 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-tmp-dir\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.019001 5103 projected.go:289] Couldn't get configMap openshift-dns/kube-root-ca.crt: object "openshift-dns"/"kube-root-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.019033 5103 projected.go:289] Couldn't get configMap openshift-dns/openshift-service-ca.crt: object "openshift-dns"/"openshift-service-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.019067 5103 projected.go:194] Error preparing data for projected volume kube-api-access-89lmd for pod openshift-dns/node-resolver-bs8rz: [object "openshift-dns"/"kube-root-ca.crt" not registered, object 
"openshift-dns"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.019149 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-kube-api-access-89lmd podName:ef3f9074-af3f-43f4-ad74-efe1ba4abc8e nodeName:}" failed. No retries permitted until 2026-01-30 00:11:27.519125711 +0000 UTC m=+77.390623763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-89lmd" (UniqueName: "kubernetes.io/projected/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-kube-api-access-89lmd") pod "node-resolver-bs8rz" (UID: "ef3f9074-af3f-43f4-ad74-efe1ba4abc8e") : [object "openshift-dns"/"kube-root-ca.crt" not registered, object "openshift-dns"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.069443 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.069489 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.069500 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.069515 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.069525 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.094682 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.104709 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 30 00:11:27 crc kubenswrapper[5103]: W0130 00:11:27.115543 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-66f54f17bd1778a867b5d05e7ea42192333e01fca48ef1d056193b6b41ff0669 WatchSource:0}: Error finding container 66f54f17bd1778a867b5d05e7ea42192333e01fca48ef1d056193b6b41ff0669: Status 404 returned error can't find the container with id 66f54f17bd1778a867b5d05e7ea42192333e01fca48ef1d056193b6b41ff0669 Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.119533 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:27 crc kubenswrapper[5103]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:27 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:27 crc kubenswrapper[5103]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 30 00:11:27 crc kubenswrapper[5103]: source /etc/kubernetes/apiserver-url.env Jan 30 00:11:27 crc kubenswrapper[5103]: else Jan 30 00:11:27 crc kubenswrapper[5103]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 30 00:11:27 crc kubenswrapper[5103]: exit 1 Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 30 00:11:27 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVa
r{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:27 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.120703 5103 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.172095 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.172171 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.172184 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.172201 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.172213 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.208145 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:27 crc kubenswrapper[5103]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:27 crc kubenswrapper[5103]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:27 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:27 crc kubenswrapper[5103]: source "/env/_master" Jan 30 00:11:27 crc kubenswrapper[5103]: set +o allexport Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 30 00:11:27 crc kubenswrapper[5103]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 30 00:11:27 crc kubenswrapper[5103]: ho_enable="--enable-hybrid-overlay" Jan 30 00:11:27 crc kubenswrapper[5103]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 30 00:11:27 crc kubenswrapper[5103]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 30 00:11:27 crc kubenswrapper[5103]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 30 00:11:27 crc kubenswrapper[5103]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:11:27 crc kubenswrapper[5103]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 30 00:11:27 crc kubenswrapper[5103]: --webhook-host=127.0.0.1 \ Jan 30 00:11:27 crc kubenswrapper[5103]: --webhook-port=9743 \ Jan 30 00:11:27 crc kubenswrapper[5103]: ${ho_enable} \ Jan 30 00:11:27 crc kubenswrapper[5103]: --enable-interconnect \ Jan 30 00:11:27 crc kubenswrapper[5103]: --disable-approver \ Jan 30 00:11:27 crc kubenswrapper[5103]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 30 00:11:27 crc kubenswrapper[5103]: --wait-for-kubernetes-api=200s \ Jan 30 00:11:27 crc kubenswrapper[5103]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 30 00:11:27 crc kubenswrapper[5103]: --loglevel="${LOGLEVEL}" Jan 30 00:11:27 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 30 00:11:27 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.210713 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:27 crc kubenswrapper[5103]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:27 crc kubenswrapper[5103]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:27 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:27 crc kubenswrapper[5103]: source "/env/_master" Jan 30 00:11:27 crc kubenswrapper[5103]: set +o allexport Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 30 00:11:27 crc kubenswrapper[5103]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:11:27 crc kubenswrapper[5103]: --disable-webhook \ Jan 30 00:11:27 crc kubenswrapper[5103]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 30 00:11:27 crc kubenswrapper[5103]: --loglevel="${LOGLEVEL}" Jan 30 00:11:27 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:27 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.211915 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.273139 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.274420 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.274560 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.274592 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.274618 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.274637 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: W0130 00:11:27.286039 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-facedfe7e0e5c2c71cd8e1a3860238e99b275e537057c358365f36e3730d7115 WatchSource:0}: Error finding container facedfe7e0e5c2c71cd8e1a3860238e99b275e537057c358365f36e3730d7115: Status 404 returned error can't find the container with id facedfe7e0e5c2c71cd8e1a3860238e99b275e537057c358365f36e3730d7115 Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.289419 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.290650 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.302702 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.302858 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.302945 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.307646 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.307809 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.307858 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.319105 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.329663 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.341555 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.352936 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.366998 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.377015 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.377100 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.377127 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.377152 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.377174 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.377866 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.391591 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.402172 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.405929 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-cni-multus\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.405993 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-system-cni-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406015 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-kubelet\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406067 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406088 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-netns\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406109 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-socket-dir-parent\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406133 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-k8s-cni-cncf-io\") pod \"multus-swfns\" (UID: 
\"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406152 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-cni-bin\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406172 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-hostroot\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406195 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-multus-certs\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406231 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-etc-kubernetes\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406263 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-os-release\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406305 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406327 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-cni-binary-copy\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406351 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-cni-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406371 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-daemon-config\") pod \"multus-swfns\" (UID: 
\"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406393 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-cnibin\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406412 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-conf-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.406430 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t7t4\" (UniqueName: \"kubernetes.io/projected/a7dd7e02-4357-4643-8c23-2fb57ba70405-kube-api-access-4t7t4\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.406547 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.406598 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.406581336 +0000 UTC m=+78.278079398 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.406984 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.407027 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.407017667 +0000 UTC m=+78.278515729 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.409957 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.416760 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.424460 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.431983 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.441042 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.479281 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.479332 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.479343 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.479363 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.479375 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
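
Every one of the status-patch failures above shares the same root cause: the pod.network-node-identity.openshift.io admission webhook is evidently served from this node at 127.0.0.1:9743 by the network-node-identity pod's webhook container, and that container is itself still in ContainerCreating, so the API server's calls to it are refused. A quick way to confirm from the node, assuming curl and crictl are available, could be:

  # Hedged sketch; probes the endpoint named in the webhook errors above.
  curl -k --max-time 5 https://127.0.0.1:9743/pod; echo "exit=$?"   # expect connection refused until the webhook container starts
  crictl ps -a --name webhook                                       # current state of the 'webhook' container
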
Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511667 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-k8s-cni-cncf-io\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511778 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-cni-bin\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511841 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-hostroot\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511784 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-k8s-cni-cncf-io\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511880 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-multus-certs\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511960 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-hostroot\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511971 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-cni-bin\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511994 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-etc-kubernetes\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.511965 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-etc-kubernetes\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.512119 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-multus-certs\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.512194 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.512732 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-os-release\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.512895 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.512902 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-os-release\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.512927 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.512964 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513003 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513117 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513127 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-cni-binary-copy\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513158 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513178 5103 projected.go:194] Error 
preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513185 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-cni-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513231 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-daemon-config\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513267 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.513242946 +0000 UTC m=+78.384741028 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513304 5103 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: object "openshift-multus"/"multus-daemon-config" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513323 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-cnibin\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513366 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-conf-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513372 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.513349958 +0000 UTC m=+78.384848030 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513416 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4t7t4\" (UniqueName: \"kubernetes.io/projected/a7dd7e02-4357-4643-8c23-2fb57ba70405-kube-api-access-4t7t4\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513507 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-cni-multus\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513593 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-system-cni-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513599 5103 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: object "openshift-multus"/"cni-copy-resources" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513624 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-kubelet\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513681 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-cni-binary-copy podName:a7dd7e02-4357-4643-8c23-2fb57ba70405 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.013655906 +0000 UTC m=+77.885153998 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-cni-binary-copy") pod "multus-swfns" (UID: "a7dd7e02-4357-4643-8c23-2fb57ba70405") : object "openshift-multus"/"cni-copy-resources" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513700 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-kubelet\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513761 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-var-lib-cni-multus\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513813 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-netns\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513856 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-socket-dir-parent\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513859 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-cnibin\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513880 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-cni-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513924 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-conf-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513926 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-socket-dir-parent\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.513958 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-daemon-config podName:a7dd7e02-4357-4643-8c23-2fb57ba70405 nodeName:}" failed. 
No retries permitted until 2026-01-30 00:11:28.013935672 +0000 UTC m=+77.885433764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-daemon-config") pod "multus-swfns" (UID: "a7dd7e02-4357-4643-8c23-2fb57ba70405") : object "openshift-multus"/"multus-daemon-config" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.513998 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-system-cni-dir\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.514084 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a7dd7e02-4357-4643-8c23-2fb57ba70405-host-run-netns\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.530080 5103 projected.go:289] Couldn't get configMap openshift-multus/kube-root-ca.crt: object "openshift-multus"/"kube-root-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.530130 5103 projected.go:289] Couldn't get configMap openshift-multus/openshift-service-ca.crt: object "openshift-multus"/"openshift-service-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.530147 5103 projected.go:194] Error preparing data for projected volume kube-api-access-4t7t4 for pod openshift-multus/multus-swfns: [object "openshift-multus"/"kube-root-ca.crt" not registered, object "openshift-multus"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.530231 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a7dd7e02-4357-4643-8c23-2fb57ba70405-kube-api-access-4t7t4 podName:a7dd7e02-4357-4643-8c23-2fb57ba70405 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.030208097 +0000 UTC m=+77.901706209 (durationBeforeRetry 500ms). 
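
The MountVolume.SetUp failures in this stretch ("object ... not registered") usually do not mean the ConfigMaps or Secrets are missing from the API: after a kubelet restart its per-pod object caches simply have not synced the referenced objects yet, including the kube-root-ca.crt and openshift-service-ca.crt sources of the kube-api-access-* projected volumes, so each mount is retried after the logged durationBeforeRetry. The "Caches populated" reflector lines a little further down mark the point at which those retries can start succeeding. To double-check that the objects themselves exist, something along these lines would do (oc assumed; kubectl behaves the same):

  # Hedged sketch; the object names are the ones quoted in the errors above.
  oc -n openshift-multus get configmap cni-copy-resources multus-daemon-config kube-root-ca.crt openshift-service-ca.crt
  oc -n openshift-network-console get configmap networking-console-plugin
  oc -n openshift-network-console get secret networking-console-plugin-cert
  oc -n openshift-network-diagnostics get configmap kube-root-ca.crt openshift-service-ca.crt
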
Error: MountVolume.SetUp failed for volume "kube-api-access-4t7t4" (UniqueName: "kubernetes.io/projected/a7dd7e02-4357-4643-8c23-2fb57ba70405-kube-api-access-4t7t4") pod "multus-swfns" (UID: "a7dd7e02-4357-4643-8c23-2fb57ba70405") : [object "openshift-multus"/"kube-root-ca.crt" not registered, object "openshift-multus"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.581627 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.581759 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.581779 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.581807 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.581863 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.615225 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-89lmd\" (UniqueName: \"kubernetes.io/projected/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-kube-api-access-89lmd\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.620858 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-swfns" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.621038 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-89lmd\" (UniqueName: \"kubernetes.io/projected/ef3f9074-af3f-43f4-ad74-efe1ba4abc8e-kube-api-access-89lmd\") pod \"node-resolver-bs8rz\" (UID: \"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\") " pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.621720 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bs8rz" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.625695 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.626087 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.626140 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.626217 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.626302 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.641564 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.655112 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:27 crc kubenswrapper[5103]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:27 crc kubenswrapper[5103]: set -uo pipefail Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 30 00:11:27 crc kubenswrapper[5103]: HOSTS_FILE="/etc/hosts" Jan 30 00:11:27 crc kubenswrapper[5103]: TEMP_FILE="/tmp/hosts.tmp" Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: # Make a temporary file with the old hosts file's attributes. Jan 30 00:11:27 crc kubenswrapper[5103]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 30 00:11:27 crc kubenswrapper[5103]: echo "Failed to preserve hosts file. Exiting." Jan 30 00:11:27 crc kubenswrapper[5103]: exit 1 Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: while true; do Jan 30 00:11:27 crc kubenswrapper[5103]: declare -A svc_ips Jan 30 00:11:27 crc kubenswrapper[5103]: for svc in "${services[@]}"; do Jan 30 00:11:27 crc kubenswrapper[5103]: # Fetch service IP from cluster dns if present. We make several tries Jan 30 00:11:27 crc kubenswrapper[5103]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 30 00:11:27 crc kubenswrapper[5103]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 30 00:11:27 crc kubenswrapper[5103]: # support UDP loadbalancers and require reaching DNS through TCP. 
Jan 30 00:11:27 crc kubenswrapper[5103]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:27 crc kubenswrapper[5103]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:27 crc kubenswrapper[5103]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:27 crc kubenswrapper[5103]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 30 00:11:27 crc kubenswrapper[5103]: for i in ${!cmds[*]} Jan 30 00:11:27 crc kubenswrapper[5103]: do Jan 30 00:11:27 crc kubenswrapper[5103]: ips=($(eval "${cmds[i]}")) Jan 30 00:11:27 crc kubenswrapper[5103]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 30 00:11:27 crc kubenswrapper[5103]: svc_ips["${svc}"]="${ips[@]}" Jan 30 00:11:27 crc kubenswrapper[5103]: break Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: done Jan 30 00:11:27 crc kubenswrapper[5103]: done Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: # Update /etc/hosts only if we get valid service IPs Jan 30 00:11:27 crc kubenswrapper[5103]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 30 00:11:27 crc kubenswrapper[5103]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 30 00:11:27 crc kubenswrapper[5103]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 30 00:11:27 crc kubenswrapper[5103]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 30 00:11:27 crc kubenswrapper[5103]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 30 00:11:27 crc kubenswrapper[5103]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 30 00:11:27 crc kubenswrapper[5103]: sleep 60 & wait Jan 30 00:11:27 crc kubenswrapper[5103]: continue Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: # Append resolver entries for services Jan 30 00:11:27 crc kubenswrapper[5103]: rc=0 Jan 30 00:11:27 crc kubenswrapper[5103]: for svc in "${!svc_ips[@]}"; do Jan 30 00:11:27 crc kubenswrapper[5103]: for ip in ${svc_ips[${svc}]}; do Jan 30 00:11:27 crc kubenswrapper[5103]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 30 00:11:27 crc kubenswrapper[5103]: done Jan 30 00:11:27 crc kubenswrapper[5103]: done Jan 30 00:11:27 crc kubenswrapper[5103]: if [[ $rc -ne 0 ]]; then Jan 30 00:11:27 crc kubenswrapper[5103]: sleep 60 & wait Jan 30 00:11:27 crc kubenswrapper[5103]: continue Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: Jan 30 00:11:27 crc kubenswrapper[5103]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 30 00:11:27 crc kubenswrapper[5103]: # Replace /etc/hosts with our modified version if needed Jan 30 00:11:27 crc kubenswrapper[5103]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 30 00:11:27 crc kubenswrapper[5103]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 30 00:11:27 crc kubenswrapper[5103]: fi Jan 30 00:11:27 crc kubenswrapper[5103]: sleep 60 & wait Jan 30 00:11:27 crc kubenswrapper[5103]: unset svc_ips Jan 30 00:11:27 crc kubenswrapper[5103]: done Jan 30 00:11:27 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89lmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bs8rz_openshift-dns(ef3f9074-af3f-43f4-ad74-efe1ba4abc8e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:27 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.656339 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bs8rz" podUID="ef3f9074-af3f-43f4-ad74-efe1ba4abc8e" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.656337 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.670160 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.685249 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.685318 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.685347 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.685380 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.685405 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.687597 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.702682 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.716329 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/37f6985e-a0c9-43c8-a1bc-00f85204425f-rootfs\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.716583 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtw8v\" (UniqueName: \"kubernetes.io/projected/37f6985e-a0c9-43c8-a1bc-00f85204425f-kube-api-access-jtw8v\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.716729 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37f6985e-a0c9-43c8-a1bc-00f85204425f-mcd-auth-proxy-config\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.716756 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37f6985e-a0c9-43c8-a1bc-00f85204425f-proxy-tls\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.719026 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.737734 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.753165 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.788443 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.788501 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.788514 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.788538 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.788555 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.817893 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/37f6985e-a0c9-43c8-a1bc-00f85204425f-rootfs\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.818007 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jtw8v\" (UniqueName: \"kubernetes.io/projected/37f6985e-a0c9-43c8-a1bc-00f85204425f-kube-api-access-jtw8v\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.818013 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/37f6985e-a0c9-43c8-a1bc-00f85204425f-rootfs\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.818099 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37f6985e-a0c9-43c8-a1bc-00f85204425f-mcd-auth-proxy-config\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.818127 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37f6985e-a0c9-43c8-a1bc-00f85204425f-proxy-tls\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.818236 5103 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: object "openshift-machine-config-operator"/"proxy-tls" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.818311 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37f6985e-a0c9-43c8-a1bc-00f85204425f-proxy-tls podName:37f6985e-a0c9-43c8-a1bc-00f85204425f nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.318289821 +0000 UTC m=+78.189787873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/37f6985e-a0c9-43c8-a1bc-00f85204425f-proxy-tls") pod "machine-config-daemon-6g6hp" (UID: "37f6985e-a0c9-43c8-a1bc-00f85204425f") : object "openshift-machine-config-operator"/"proxy-tls" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.818314 5103 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: object "openshift-machine-config-operator"/"kube-rbac-proxy" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.818447 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37f6985e-a0c9-43c8-a1bc-00f85204425f-mcd-auth-proxy-config podName:37f6985e-a0c9-43c8-a1bc-00f85204425f nodeName:}" failed. 
No retries permitted until 2026-01-30 00:11:28.318422655 +0000 UTC m=+78.189920727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/37f6985e-a0c9-43c8-a1bc-00f85204425f-mcd-auth-proxy-config") pod "machine-config-daemon-6g6hp" (UID: "37f6985e-a0c9-43c8-a1bc-00f85204425f") : object "openshift-machine-config-operator"/"kube-rbac-proxy" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.834435 5103 projected.go:289] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: object "openshift-machine-config-operator"/"kube-root-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.834487 5103 projected.go:289] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: object "openshift-machine-config-operator"/"openshift-service-ca.crt" not registered Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.834502 5103 projected.go:194] Error preparing data for projected volume kube-api-access-jtw8v for pod openshift-machine-config-operator/machine-config-daemon-6g6hp: [object "openshift-machine-config-operator"/"kube-root-ca.crt" not registered, object "openshift-machine-config-operator"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: E0130 00:11:27.834582 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37f6985e-a0c9-43c8-a1bc-00f85204425f-kube-api-access-jtw8v podName:37f6985e-a0c9-43c8-a1bc-00f85204425f nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.334559306 +0000 UTC m=+78.206057358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jtw8v" (UniqueName: "kubernetes.io/projected/37f6985e-a0c9-43c8-a1bc-00f85204425f-kube-api-access-jtw8v") pod "machine-config-daemon-6g6hp" (UID: "37f6985e-a0c9-43c8-a1bc-00f85204425f") : [object "openshift-machine-config-operator"/"kube-root-ca.crt" not registered, object "openshift-machine-config-operator"/"openshift-service-ca.crt" not registered] Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.867854 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.872098 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.872807 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.872858 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.872874 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.874925 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.883452 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.890100 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.890153 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.890172 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.890191 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.890204 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.893939 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.894386 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.896305 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.896692 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.896984 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.897289 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.897588 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.897771 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.897991 5103 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.898225 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.898313 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.898479 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.898833 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919004 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-host\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919071 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d918c96-a16b-4836-ac5a-83c3388f5468-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919099 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq678\" (UniqueName: \"kubernetes.io/projected/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-kube-api-access-sq678\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919124 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919355 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-serviceca\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919754 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.919899 5103 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prndc\" (UniqueName: \"kubernetes.io/projected/7d918c96-a16b-4836-ac5a-83c3388f5468-kube-api-access-prndc\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.921112 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.929900 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.940631 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.950499 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.960154 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.970619 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.978836 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.989889 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.992014 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.992057 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 
00:11:27.992066 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.992082 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:27 crc kubenswrapper[5103]: I0130 00:11:27.992093 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:27Z","lastTransitionTime":"2026-01-30T00:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.002579 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.012763 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.020763 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.020973 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-serviceca\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.021087 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.021227 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.021853 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxxsl\" (UniqueName: \"kubernetes.io/projected/566ee5b2-938f-41f6-8625-e8a987181d60-kube-api-access-zxxsl\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.021987 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-prndc\" (UniqueName: \"kubernetes.io/projected/7d918c96-a16b-4836-ac5a-83c3388f5468-kube-api-access-prndc\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022118 5103 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-cni-binary-copy\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.021886 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.021808 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022436 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-serviceca\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022450 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-daemon-config\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022550 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-host\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022621 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d918c96-a16b-4836-ac5a-83c3388f5468-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022672 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sq678\" (UniqueName: \"kubernetes.io/projected/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-kube-api-access-sq678\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.022709 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-host\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.023095 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-cni-binary-copy\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.023364 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a7dd7e02-4357-4643-8c23-2fb57ba70405-multus-daemon-config\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.024455 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.033502 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.035533 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d918c96-a16b-4836-ac5a-83c3388f5468-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.036843 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.036974 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.039779 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq678\" (UniqueName: \"kubernetes.io/projected/a0b75391-d8bb-4610-a69e-1f5c3a4e4eef-kube-api-access-sq678\") pod \"node-ca-226mj\" (UID: \"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\") " pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.042512 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.045396 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-prndc\" (UniqueName: \"kubernetes.io/projected/7d918c96-a16b-4836-ac5a-83c3388f5468-kube-api-access-prndc\") pod \"ovnkube-control-plane-57b78d8988-k7mv6\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.051231 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.060467 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.068320 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.077068 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.087630 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.095083 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.095127 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.095137 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.095153 5103 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.095168 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.098315 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.105599 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.113923 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.122101 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123220 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123288 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-system-cni-dir\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123351 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zxxsl\" (UniqueName: \"kubernetes.io/projected/566ee5b2-938f-41f6-8625-e8a987181d60-kube-api-access-zxxsl\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123439 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-os-release\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.123491 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123503 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cnibin\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123724 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.123787 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.623763047 +0000 UTC m=+78.495261109 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.123960 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4t7t4\" (UniqueName: \"kubernetes.io/projected/a7dd7e02-4357-4643-8c23-2fb57ba70405-kube-api-access-4t7t4\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.124094 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-binary-copy\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.124209 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.124344 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.124420 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4b7\" (UniqueName: \"kubernetes.io/projected/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-kube-api-access-bf4b7\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.131271 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t7t4\" (UniqueName: \"kubernetes.io/projected/a7dd7e02-4357-4643-8c23-2fb57ba70405-kube-api-access-4t7t4\") pod \"multus-swfns\" (UID: \"a7dd7e02-4357-4643-8c23-2fb57ba70405\") " pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.133078 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.141918 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxxsl\" (UniqueName: \"kubernetes.io/projected/566ee5b2-938f-41f6-8625-e8a987181d60-kube-api-access-zxxsl\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " 
pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.163025 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.172253 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.180747 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.187598 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.194716 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.197512 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.197557 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.197572 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.197592 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.197604 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.208019 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.210956 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-226mj" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.217440 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.220550 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225643 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bf4b7\" (UniqueName: \"kubernetes.io/projected/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-kube-api-access-bf4b7\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225759 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-system-cni-dir\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225815 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-os-release\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225871 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cnibin\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225901 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225942 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-binary-copy\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225973 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225977 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-system-cni-dir\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.225999 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.226124 5103 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: object "openshift-multus"/"default-cni-sysctl-allowlist" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.226224 5103 configmap.go:193] Couldn't get configMap openshift-multus/whereabouts-flatfile-config: object "openshift-multus"/"whereabouts-flatfile-config" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.226245 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-sysctl-allowlist podName:2ed60012-d4e8-45fd-b124-fe7d6ca49ca0 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.726219625 +0000 UTC m=+78.597717677 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-6tmbq" (UID: "2ed60012-d4e8-45fd-b124-fe7d6ca49ca0") : object "openshift-multus"/"default-cni-sysctl-allowlist" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.226264 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cnibin\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.226326 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-whereabouts-flatfile-configmap podName:2ed60012-d4e8-45fd-b124-fe7d6ca49ca0 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:28.726298327 +0000 UTC m=+78.597796379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whereabouts-flatfile-configmap" (UniqueName: "kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-whereabouts-flatfile-configmap") pod "multus-additional-cni-plugins-6tmbq" (UID: "2ed60012-d4e8-45fd-b124-fe7d6ca49ca0") : object "openshift-multus"/"whereabouts-flatfile-config" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.226186 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-os-release\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.226384 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.226929 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-binary-copy\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: W0130 00:11:28.241283 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0b75391_d8bb_4610_a69e_1f5c3a4e4eef.slice/crio-48cb9a58d31bf42fa131e5a935c5c0a6958e3b9e8c2227b25fd03f0922daf530 WatchSource:0}: Error finding container 48cb9a58d31bf42fa131e5a935c5c0a6958e3b9e8c2227b25fd03f0922daf530: Status 404 returned error can't find the container with id 48cb9a58d31bf42fa131e5a935c5c0a6958e3b9e8c2227b25fd03f0922daf530 Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.244842 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | 
xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 30 00:11:28 crc kubenswrapper[5103]: while [ true ]; Jan 30 00:11:28 crc kubenswrapper[5103]: do Jan 30 00:11:28 crc kubenswrapper[5103]: for f in $(ls /tmp/serviceca); do Jan 30 00:11:28 crc kubenswrapper[5103]: echo $f Jan 30 00:11:28 crc kubenswrapper[5103]: ca_file_path="/tmp/serviceca/${f}" Jan 30 00:11:28 crc kubenswrapper[5103]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 30 00:11:28 crc kubenswrapper[5103]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 30 00:11:28 crc kubenswrapper[5103]: if [ -e "${reg_dir_path}" ]; then Jan 30 00:11:28 crc kubenswrapper[5103]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 30 00:11:28 crc kubenswrapper[5103]: else Jan 30 00:11:28 crc kubenswrapper[5103]: mkdir $reg_dir_path Jan 30 00:11:28 crc kubenswrapper[5103]: cp $ca_file_path $reg_dir_path/ca.crt Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: for d in $(ls /etc/docker/certs.d); do Jan 30 00:11:28 crc kubenswrapper[5103]: echo $d Jan 30 00:11:28 crc kubenswrapper[5103]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 30 00:11:28 crc kubenswrapper[5103]: reg_conf_path="/tmp/serviceca/${dp}" Jan 30 00:11:28 crc kubenswrapper[5103]: if [ ! -e "${reg_conf_path}" ]; then Jan 30 00:11:28 crc kubenswrapper[5103]: rm -rf /etc/docker/certs.d/$d Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: sleep 60 & wait ${!} Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sq678,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-226mj_openshift-image-registry(a0b75391-d8bb-4610-a69e-1f5c3a4e4eef): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.245117 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:28 crc kubenswrapper[5103]: set -euo pipefail Jan 30 00:11:28 crc kubenswrapper[5103]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 30 00:11:28 crc kubenswrapper[5103]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 30 00:11:28 crc kubenswrapper[5103]: # As the secret mount is optional we must wait for the files to be present. Jan 30 00:11:28 crc kubenswrapper[5103]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 30 00:11:28 crc kubenswrapper[5103]: TS=$(date +%s) Jan 30 00:11:28 crc kubenswrapper[5103]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 30 00:11:28 crc kubenswrapper[5103]: HAS_LOGGED_INFO=0 Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: log_missing_certs(){ Jan 30 00:11:28 crc kubenswrapper[5103]: CUR_TS=$(date +%s) Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 30 00:11:28 crc kubenswrapper[5103]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 30 00:11:28 crc kubenswrapper[5103]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 30 00:11:28 crc kubenswrapper[5103]: HAS_LOGGED_INFO=1 Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: } Jan 30 00:11:28 crc kubenswrapper[5103]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 30 00:11:28 crc kubenswrapper[5103]: log_missing_certs Jan 30 00:11:28 crc kubenswrapper[5103]: sleep 5 Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 30 00:11:28 crc kubenswrapper[5103]: exec /usr/bin/kube-rbac-proxy \ Jan 30 00:11:28 crc kubenswrapper[5103]: --logtostderr \ Jan 30 00:11:28 crc kubenswrapper[5103]: --secure-listen-address=:9108 \ Jan 30 00:11:28 crc kubenswrapper[5103]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 30 00:11:28 crc kubenswrapper[5103]: --upstream=http://127.0.0.1:29108/ \ Jan 30 00:11:28 crc kubenswrapper[5103]: --tls-private-key-file=${TLS_PK} \ Jan 30 00:11:28 crc kubenswrapper[5103]: --tls-cert-file=${TLS_CERT} Jan 30 00:11:28 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prndc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-k7mv6_openshift-ovn-kubernetes(7d918c96-a16b-4836-ac5a-83c3388f5468): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.246283 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-226mj" podUID="a0b75391-d8bb-4610-a69e-1f5c3a4e4eef" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.249150 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf4b7\" (UniqueName: \"kubernetes.io/projected/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-kube-api-access-bf4b7\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.251285 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-swfns" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.254955 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: source "/env/_master" Jan 30 00:11:28 crc kubenswrapper[5103]: set +o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v4_join_subnet_opt= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v6_join_subnet_opt= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v4_transit_switch_subnet_opt= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v6_transit_switch_subnet_opt= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: dns_name_resolver_enabled_flag= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # This is needed so that converting clusters from GA to TP Jan 30 00:11:28 crc kubenswrapper[5103]: # will rollout control plane pods as well Jan 30 00:11:28 crc kubenswrapper[5103]: network_segmentation_enabled_flag= Jan 30 00:11:28 crc kubenswrapper[5103]: multi_network_enabled_flag= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "true" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "true" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "true" != "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: route_advertisements_enable_flag= Jan 30 
00:11:28 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: preconfigured_udn_addresses_enable_flag= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # Enable multi-network policy if configured (control-plane always full mode) Jan 30 00:11:28 crc kubenswrapper[5103]: multi_network_policy_enabled_flag= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # Enable admin network policy if configured (control-plane always full mode) Jan 30 00:11:28 crc kubenswrapper[5103]: admin_network_policy_enabled_flag= Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "true" == "true" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: if [ "shared" == "shared" ]; then Jan 30 00:11:28 crc kubenswrapper[5103]: gateway_mode_flags="--gateway-mode shared" Jan 30 00:11:28 crc kubenswrapper[5103]: elif [ "shared" == "local" ]; then Jan 30 00:11:28 crc kubenswrapper[5103]: gateway_mode_flags="--gateway-mode local" Jan 30 00:11:28 crc kubenswrapper[5103]: else Jan 30 00:11:28 crc kubenswrapper[5103]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 30 00:11:28 crc kubenswrapper[5103]: exit 1 Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 30 00:11:28 crc kubenswrapper[5103]: exec /usr/bin/ovnkube \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-interconnect \ Jan 30 00:11:28 crc kubenswrapper[5103]: --init-cluster-manager "${K8S_NODE}" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 30 00:11:28 crc kubenswrapper[5103]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --metrics-bind-address "127.0.0.1:29108" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --metrics-enable-pprof \ Jan 30 00:11:28 crc kubenswrapper[5103]: --metrics-enable-config-duration \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${ovn_v4_join_subnet_opt} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${ovn_v6_join_subnet_opt} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${dns_name_resolver_enabled_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${persistent_ips_enabled_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${multi_network_enabled_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${network_segmentation_enabled_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${gateway_mode_flags} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${route_advertisements_enable_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${preconfigured_udn_addresses_enable_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-egress-ip=true \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-egress-firewall=true \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-egress-qos=true \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-egress-service=true \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-multicast \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-multi-external-gateway=true \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${multi_network_policy_enabled_flag} \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${admin_network_policy_enabled_flag} Jan 30 00:11:28 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prndc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-k7mv6_openshift-ovn-kubernetes(7d918c96-a16b-4836-ac5a-83c3388f5468): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.256145 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" Jan 30 00:11:28 crc kubenswrapper[5103]: W0130 00:11:28.262355 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7dd7e02_4357_4643_8c23_2fb57ba70405.slice/crio-51fe3ecace5ef60f7821d3b34991b5c62f99813d3db14bf24fd27e56faf1a5e1 WatchSource:0}: Error finding container 51fe3ecace5ef60f7821d3b34991b5c62f99813d3db14bf24fd27e56faf1a5e1: Status 404 returned error can't find the container with id 51fe3ecace5ef60f7821d3b34991b5c62f99813d3db14bf24fd27e56faf1a5e1 Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.265274 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 30 00:11:28 crc kubenswrapper[5103]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 30 00:11:28 crc kubenswrapper[5103]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t7t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-swfns_openshift-multus(a7dd7e02-4357-4643-8c23-2fb57ba70405): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.268430 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-swfns" podUID="a7dd7e02-4357-4643-8c23-2fb57ba70405" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.299660 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.299719 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.299788 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.299812 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.300201 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.324726 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.327361 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.327390 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.327449 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37f6985e-a0c9-43c8-a1bc-00f85204425f-mcd-auth-proxy-config\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.327499 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37f6985e-a0c9-43c8-a1bc-00f85204425f-proxy-tls\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.327649 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.329170 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37f6985e-a0c9-43c8-a1bc-00f85204425f-mcd-auth-proxy-config\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.333778 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37f6985e-a0c9-43c8-a1bc-00f85204425f-proxy-tls\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.336496 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.346623 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.354097 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.363336 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.372224 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.379980 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.401751 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.401808 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.401818 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.401835 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.401847 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.411575 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428400 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-etc-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428617 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428671 5103 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jtw8v\" (UniqueName: \"kubernetes.io/projected/37f6985e-a0c9-43c8-a1bc-00f85204425f-kube-api-access-jtw8v\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428707 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-env-overrides\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428747 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-systemd-units\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.428758 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428799 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovn-node-metrics-cert\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.428862 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:30.428839954 +0000 UTC m=+80.300338006 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.428950 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-netd\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429008 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-bin\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429032 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429168 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429212 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-netns\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429266 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429290 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-ovn\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429338 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-script-lib\") pod \"ovnkube-node-8lwjf\" (UID: 
\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429364 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2mbn\" (UniqueName: \"kubernetes.io/projected/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-kube-api-access-j2mbn\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.429369 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429423 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-kubelet\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.429447 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:30.429427148 +0000 UTC m=+80.300925210 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429491 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-node-log\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429566 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-slash\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429621 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-systemd\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429665 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-var-lib-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 
00:11:28.429694 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-ovn-kubernetes\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429713 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-config\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.429797 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-log-socket\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.434347 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtw8v\" (UniqueName: \"kubernetes.io/projected/37f6985e-a0c9-43c8-a1bc-00f85204425f-kube-api-access-jtw8v\") pod \"machine-config-daemon-6g6hp\" (UID: \"37f6985e-a0c9-43c8-a1bc-00f85204425f\") " pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.452708 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.470897 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.472943 5103 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.472962 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.473192 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486394 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"66f54f17bd1778a867b5d05e7ea42192333e01fca48ef1d056193b6b41ff0669"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486446 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"d61d6c3995a503b66feabc08d51a197a4ec103a2e9d6df32ab81ca26927ce79c"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486467 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486479 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" event={"ID":"7d918c96-a16b-4836-ac5a-83c3388f5468","Type":"ContainerStarted","Data":"578d2296c0b9b147f002bab00ce887ae174a1dfc57c08f5d70b218ff4df99c74"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486490 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-226mj" event={"ID":"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef","Type":"ContainerStarted","Data":"48cb9a58d31bf42fa131e5a935c5c0a6958e3b9e8c2227b25fd03f0922daf530"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486500 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bs8rz" event={"ID":"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e","Type":"ContainerStarted","Data":"1d888c4fadd263fbfa5894c72b0570a279483acc20df2628e05b5ba47677c065"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486511 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"facedfe7e0e5c2c71cd8e1a3860238e99b275e537057c358365f36e3730d7115"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.486846 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.487004 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.487083 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.487150 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.488057 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.488358 5103 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:28 crc kubenswrapper[5103]: set -uo pipefail Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 30 00:11:28 crc kubenswrapper[5103]: HOSTS_FILE="/etc/hosts" Jan 30 00:11:28 crc kubenswrapper[5103]: TEMP_FILE="/tmp/hosts.tmp" Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # Make a temporary file with the old hosts file's attributes. Jan 30 00:11:28 crc kubenswrapper[5103]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 30 00:11:28 crc kubenswrapper[5103]: echo "Failed to preserve hosts file. Exiting." Jan 30 00:11:28 crc kubenswrapper[5103]: exit 1 Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: while true; do Jan 30 00:11:28 crc kubenswrapper[5103]: declare -A svc_ips Jan 30 00:11:28 crc kubenswrapper[5103]: for svc in "${services[@]}"; do Jan 30 00:11:28 crc kubenswrapper[5103]: # Fetch service IP from cluster dns if present. We make several tries Jan 30 00:11:28 crc kubenswrapper[5103]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 30 00:11:28 crc kubenswrapper[5103]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 30 00:11:28 crc kubenswrapper[5103]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 30 00:11:28 crc kubenswrapper[5103]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:28 crc kubenswrapper[5103]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:28 crc kubenswrapper[5103]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 30 00:11:28 crc kubenswrapper[5103]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 30 00:11:28 crc kubenswrapper[5103]: for i in ${!cmds[*]} Jan 30 00:11:28 crc kubenswrapper[5103]: do Jan 30 00:11:28 crc kubenswrapper[5103]: ips=($(eval "${cmds[i]}")) Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: svc_ips["${svc}"]="${ips[@]}" Jan 30 00:11:28 crc kubenswrapper[5103]: break Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # Update /etc/hosts only if we get valid service IPs Jan 30 00:11:28 crc kubenswrapper[5103]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 30 00:11:28 crc kubenswrapper[5103]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 30 00:11:28 crc kubenswrapper[5103]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 30 00:11:28 crc kubenswrapper[5103]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 30 00:11:28 crc kubenswrapper[5103]: sleep 60 & wait Jan 30 00:11:28 crc kubenswrapper[5103]: continue Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # Append resolver entries for services Jan 30 00:11:28 crc kubenswrapper[5103]: rc=0 Jan 30 00:11:28 crc kubenswrapper[5103]: for svc in "${!svc_ips[@]}"; do Jan 30 00:11:28 crc kubenswrapper[5103]: for ip in ${svc_ips[${svc}]}; do Jan 30 00:11:28 crc kubenswrapper[5103]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ $rc -ne 0 ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: sleep 60 & wait Jan 30 00:11:28 crc kubenswrapper[5103]: continue Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 30 00:11:28 crc kubenswrapper[5103]: # Replace /etc/hosts with our modified version if needed Jan 30 00:11:28 crc kubenswrapper[5103]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 30 00:11:28 crc kubenswrapper[5103]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: sleep 60 & wait Jan 30 00:11:28 crc kubenswrapper[5103]: unset svc_ips Jan 30 00:11:28 crc kubenswrapper[5103]: done Jan 30 00:11:28 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89lmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bs8rz_openshift-dns(ef3f9074-af3f-43f4-ad74-efe1ba4abc8e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.489319 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.489420 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bs8rz" podUID="ef3f9074-af3f-43f4-ad74-efe1ba4abc8e" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.490119 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: source "/env/_master" Jan 30 00:11:28 crc kubenswrapper[5103]: set +o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 30 00:11:28 crc kubenswrapper[5103]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 30 00:11:28 crc kubenswrapper[5103]: ho_enable="--enable-hybrid-overlay" Jan 30 00:11:28 crc kubenswrapper[5103]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 30 00:11:28 crc kubenswrapper[5103]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 30 00:11:28 crc kubenswrapper[5103]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 30 00:11:28 crc kubenswrapper[5103]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:11:28 crc kubenswrapper[5103]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --webhook-host=127.0.0.1 \ Jan 30 00:11:28 crc kubenswrapper[5103]: --webhook-port=9743 \ Jan 30 00:11:28 crc kubenswrapper[5103]: ${ho_enable} \ Jan 30 00:11:28 crc kubenswrapper[5103]: --enable-interconnect \ Jan 30 00:11:28 crc kubenswrapper[5103]: --disable-approver \ Jan 30 00:11:28 crc kubenswrapper[5103]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --wait-for-kubernetes-api=200s \ Jan 30 00:11:28 crc kubenswrapper[5103]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --loglevel="${LOGLEVEL}" Jan 30 00:11:28 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.491216 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.495128 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: source "/env/_master" Jan 30 00:11:28 crc kubenswrapper[5103]: set +o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: Jan 30 00:11:28 crc kubenswrapper[5103]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 30 00:11:28 crc kubenswrapper[5103]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 30 00:11:28 crc kubenswrapper[5103]: --disable-webhook \ Jan 30 00:11:28 crc kubenswrapper[5103]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 30 00:11:28 crc kubenswrapper[5103]: --loglevel="${LOGLEVEL}" Jan 30 00:11:28 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.497805 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for 
\"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.499222 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.499303 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:28 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:28 crc kubenswrapper[5103]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 30 00:11:28 crc kubenswrapper[5103]: source /etc/kubernetes/apiserver-url.env Jan 30 00:11:28 crc kubenswrapper[5103]: else Jan 30 00:11:28 crc kubenswrapper[5103]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 30 00:11:28 crc kubenswrapper[5103]: exit 1 Jan 30 00:11:28 crc kubenswrapper[5103]: fi Jan 30 00:11:28 crc kubenswrapper[5103]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 30 00:11:28 crc kubenswrapper[5103]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.500460 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.504464 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.504505 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.504523 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.504571 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.504837 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: W0130 00:11:28.504865 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37f6985e_a0c9_43c8_a1bc_00f85204425f.slice/crio-3c42330e1db35d226e4d0bab62f5575af608323ffc3993bc0551e0f8e21f70b3 WatchSource:0}: Error finding container 3c42330e1db35d226e4d0bab62f5575af608323ffc3993bc0551e0f8e21f70b3: Status 404 returned error can't find the container with id 3c42330e1db35d226e4d0bab62f5575af608323ffc3993bc0551e0f8e21f70b3 Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.507129 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtw8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-6g6hp_openshift-machine-config-operator(37f6985e-a0c9-43c8-a1bc-00f85204425f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.508372 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.509593 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtw8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-6g6hp_openshift-machine-config-operator(37f6985e-a0c9-43c8-a1bc-00f85204425f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.510916 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.529103 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530720 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530781 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530824 5103 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530868 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530902 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530937 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.530969 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531004 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531039 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531097 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531135 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531170 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 
00:11:28.531203 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531236 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531271 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531397 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531432 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531466 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531500 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531538 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531564 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531570 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531616 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531635 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531653 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531675 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531693 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531710 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531725 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531745 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531762 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 
00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531779 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531798 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531792 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531815 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531832 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531849 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531867 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531885 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531902 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531918 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: 
\"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531934 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531954 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531970 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.531987 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532031 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532074 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532093 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532109 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532128 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532194 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532211 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532226 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532247 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532272 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532294 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532313 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532406 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.532454 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533014 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533118 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533161 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.533212 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:11:29.033184397 +0000 UTC m=+78.904682479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533261 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533303 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533313 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533433 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533465 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533522 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533554 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533613 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533641 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533695 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533723 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533840 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: 
\"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533867 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533926 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.533992 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534020 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534011 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534083 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534118 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534171 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534198 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534254 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534282 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534334 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534364 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534416 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534444 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534496 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534523 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534572 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534604 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.534734 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.535153 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536277 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536345 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536706 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536744 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536915 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536929 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.536089 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.535565 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537588 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537688 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537733 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537776 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537813 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537848 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537895 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537932 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537968 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538015 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538088 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: 
\"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538132 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538165 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538202 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538244 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538280 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538317 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538361 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538399 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538859 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.539734 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540506 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537687 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540721 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540590 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537886 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540825 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.537985 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540855 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540887 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540914 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540940 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540969 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540995 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541023 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541067 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541092 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541122 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: 
\"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541149 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541173 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541199 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541225 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541249 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541281 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541309 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541343 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541370 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541395 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 
00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541424 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541448 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541477 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541504 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541529 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541556 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541584 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541613 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541640 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541664 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: 
\"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541689 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541713 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541743 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541773 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541806 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541833 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541865 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541892 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541917 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541944 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: 
\"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541972 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542562 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542636 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542660 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542683 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542706 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542725 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542746 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542766 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542787 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod 
\"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542817 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542847 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542869 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542891 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542916 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542937 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542955 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542980 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543001 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543036 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod 
\"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543090 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543121 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543147 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543572 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544715 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544953 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545188 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545244 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545371 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545427 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545477 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545527 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545572 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545615 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545669 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545712 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545766 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545807 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545855 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545899 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545947 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545988 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546026 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546093 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546134 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546172 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546215 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546259 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547347 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:28 crc 
kubenswrapper[5103]: I0130 00:11:28.547410 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547550 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547608 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547652 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547694 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547749 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547795 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547838 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547884 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547930 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: 
\"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548467 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548527 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548581 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548644 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548705 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548750 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548811 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548857 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548903 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548972 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod 
\"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549024 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549092 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549137 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549192 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549248 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549285 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538265 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538577 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549377 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-netd\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549473 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-bin\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549521 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549587 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.549855 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550028 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-netns\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550130 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550173 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-ovn\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550215 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-script-lib\") pod \"ovnkube-node-8lwjf\" (UID: 
\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550258 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j2mbn\" (UniqueName: \"kubernetes.io/projected/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-kube-api-access-j2mbn\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550322 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-kubelet\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550361 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-node-log\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550434 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-slash\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550473 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-systemd\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550531 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-var-lib-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550579 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-ovn-kubernetes\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550621 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-config\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550737 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-log-socket\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" 
Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550883 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-etc-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551011 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-env-overrides\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551109 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-systemd-units\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551205 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovn-node-metrics-cert\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551347 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551372 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551402 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551426 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551448 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551470 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551495 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551516 5103 reconciler_common.go:299] "Volume detached for 
volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551539 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551560 5103 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551580 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551603 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551623 5103 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551642 5103 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551748 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551773 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551794 5103 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551820 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551841 5103 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551864 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551887 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: 
\"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551909 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551930 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551951 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556503 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-ovn-kubernetes\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556587 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-kubelet\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556625 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-node-log\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556662 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-netns\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538732 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.538916 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.539066 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.539527 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.539653 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556979 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.557126 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.557687 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.557786 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.557832 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-ovn\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.557974 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558001 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558045 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558110 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovn-node-metrics-cert\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558159 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-etc-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558162 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558158 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.539895 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.539934 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540281 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540649 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558297 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558320 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540706 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). 
InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541142 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541252 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541499 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541618 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541719 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.541778 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542212 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542024 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.558386 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-log-socket\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542652 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542782 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.542810 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543706 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543752 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.543894 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544347 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.540713 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544646 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544734 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544892 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.544939 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545169 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545302 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545384 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545520 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545496 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545658 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545744 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.545962 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546162 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546415 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546581 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546862 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546878 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546925 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.546933 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547104 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547237 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547426 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547658 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547750 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547839 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547974 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548037 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548157 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548172 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548175 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548309 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.547412 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548370 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548829 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548880 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.548908 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550486 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550834 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550855 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550889 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.550951 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551315 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551645 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551861 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551948 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.551936 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.552432 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.552571 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.552993 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553070 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553284 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553306 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). 
InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553546 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553608 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553707 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553715 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553656 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553803 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559451 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559607 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-systemd-units\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.553821 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554240 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554360 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559686 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-bin\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559656 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-systemd\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559576 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-netd\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559729 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-slash\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559747 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-var-lib-openvswitch\") pod \"ovnkube-node-8lwjf\" (UID: 
\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559754 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554465 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554493 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554599 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554800 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554893 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.554932 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555172 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559721 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555341 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555539 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555647 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555749 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555875 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555973 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556211 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556276 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.555607 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556283 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556306 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556342 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.560012 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-script-lib\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.556493 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.560162 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.560181 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.556602 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.560211 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.560225 5103 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.560267 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:30.560248234 +0000 UTC m=+80.431746286 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.556793 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559304 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.559378 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.560378 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:30.560359607 +0000 UTC m=+80.431857659 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.561132 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.561090 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.561702 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-env-overrides\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.561929 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-config\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.562255 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.562939 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.563085 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.563171 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.563707 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.563814 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.563843 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.564126 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.565505 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.565537 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.565643 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.565799 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.565838 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.566131 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.566186 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.566239 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.566703 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.566847 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.567688 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.568471 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.568489 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.569474 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.569490 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.569602 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.569622 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.569724 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.570299 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.570450 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.570879 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571340 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571584 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571760 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571788 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571846 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571896 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571919 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.571957 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.572606 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.572731 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.572964 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.573202 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.573263 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.573333 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.573363 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.573850 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.574284 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). 
InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.574888 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.575029 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.575345 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.575417 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.580343 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.580408 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.580530 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.580671 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.581730 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.585856 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.586295 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.588734 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.589712 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.590033 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.591342 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.591463 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.591580 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.591766 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.591913 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.592169 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.592281 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.592698 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.592676 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.592999 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.593174 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.596208 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.607818 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2mbn\" (UniqueName: \"kubernetes.io/projected/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-kube-api-access-j2mbn\") pod \"ovnkube-node-8lwjf\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.612602 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.612704 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.612761 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.612820 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.612880 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.614669 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.615023 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.622467 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.653396 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.653626 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.653700 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:29.653686012 +0000 UTC m=+79.525184064 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.657349 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658198 5103 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658397 5103 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658462 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658516 5103 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658575 5103 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658634 5103 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658689 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658752 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658811 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.658977 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659036 5103 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659107 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659176 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659231 5103 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659290 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659342 5103 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659400 5103 reconciler_common.go:299] "Volume detached for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659452 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659508 5103 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659563 5103 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659621 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659673 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659728 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659784 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659847 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659907 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659968 5103 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660027 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660111 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660179 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: 
\"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660237 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660296 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660348 5103 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660404 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660464 5103 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660521 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660577 5103 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660647 5103 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660712 5103 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660771 5103 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660830 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660911 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.660984 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661063 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661131 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661206 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661275 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661342 5103 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661410 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661487 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661552 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661610 5103 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661677 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661749 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661811 5103 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661876 5103 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on 
node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.661935 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662007 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662111 5103 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662184 5103 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662256 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662505 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662565 5103 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662624 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662689 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662741 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662791 5103 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662843 5103 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662897 5103 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.662953 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663009 5103 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663079 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663138 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663200 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663252 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663314 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663365 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663415 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663465 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663522 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663583 5103 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663637 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663688 5103 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663744 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663806 5103 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663860 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663921 5103 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.663977 5103 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664031 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664139 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664213 5103 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664267 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664404 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664544 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664563 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath 
\"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664579 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.659127 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664614 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664794 5103 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.664914 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665728 5103 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665751 5103 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665766 5103 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665782 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665797 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665811 5103 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665827 5103 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665841 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: 
\"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665854 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665870 5103 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665886 5103 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665903 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665920 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665934 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665947 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665960 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665972 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.665987 5103 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666000 5103 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666012 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666024 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666037 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666067 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666080 5103 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666094 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666107 5103 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666120 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666133 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666146 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666159 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666171 5103 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666185 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666200 5103 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666213 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: 
\"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666227 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666240 5103 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666252 5103 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666264 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666277 5103 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666291 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666303 5103 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666315 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666328 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666341 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666354 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666367 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666380 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: 
\"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666393 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666405 5103 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666418 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666430 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666443 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666455 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666468 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666499 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666514 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666528 5103 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666542 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666555 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666569 5103 
reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666582 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666594 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666607 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666621 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666633 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666646 5103 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666659 5103 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666671 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666684 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666697 5103 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666712 5103 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666725 5103 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666739 5103 
reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666751 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666765 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666778 5103 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666790 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666803 5103 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666816 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666829 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666842 5103 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666854 5103 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666870 5103 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666884 5103 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666899 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666912 5103 
reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666926 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666941 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666954 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666966 5103 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666979 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.666992 5103 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.667006 5103 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.667018 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.667033 5103 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.667044 5103 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.667073 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.667085 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.695601 5103 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.715166 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc 
kubenswrapper[5103]: I0130 00:11:28.715202 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.715213 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.715229 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.715242 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.741166 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.768439 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 
30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.768543 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.768586 5103 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.769475 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.770566 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/2ed60012-d4e8-45fd-b124-fe7d6ca49ca0-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-6tmbq\" (UID: \"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\") " pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.783826 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.818867 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f9
7e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.842957 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.843008 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.843021 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.843041 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.843084 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.845112 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:28 crc kubenswrapper[5103]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 30 00:11:28 crc kubenswrapper[5103]: apiVersion: v1 Jan 30 00:11:28 crc kubenswrapper[5103]: clusters: Jan 30 00:11:28 crc kubenswrapper[5103]: - cluster: Jan 30 00:11:28 crc kubenswrapper[5103]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 30 00:11:28 crc kubenswrapper[5103]: server: https://api-int.crc.testing:6443 Jan 30 00:11:28 crc kubenswrapper[5103]: name: default-cluster Jan 30 00:11:28 crc kubenswrapper[5103]: contexts: Jan 30 00:11:28 crc kubenswrapper[5103]: - context: Jan 30 00:11:28 crc kubenswrapper[5103]: cluster: default-cluster Jan 30 00:11:28 crc kubenswrapper[5103]: namespace: default Jan 30 00:11:28 crc kubenswrapper[5103]: user: default-auth Jan 30 00:11:28 crc kubenswrapper[5103]: name: default-context Jan 30 00:11:28 crc kubenswrapper[5103]: current-context: default-context Jan 30 00:11:28 crc kubenswrapper[5103]: kind: Config Jan 30 00:11:28 crc kubenswrapper[5103]: preferences: {} Jan 30 00:11:28 crc kubenswrapper[5103]: users: Jan 30 00:11:28 crc kubenswrapper[5103]: - name: default-auth Jan 30 00:11:28 crc kubenswrapper[5103]: user: Jan 30 00:11:28 crc kubenswrapper[5103]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:11:28 crc kubenswrapper[5103]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:11:28 crc kubenswrapper[5103]: EOF Jan 30 00:11:28 crc kubenswrapper[5103]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2mbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-8lwjf_openshift-ovn-kubernetes(b3efa2c9-9a52-46ea-b9ad-f708dd386e79): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:28 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.846259 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.855461 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.864641 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.867250 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.867368 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.870838 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.871489 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.873754 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.877068 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.881470 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.887455 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.888640 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" 
path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.893930 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.900555 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.901105 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.915175 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.917567 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: 
I0130 00:11:28.923566 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.924727 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.934974 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.940692 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.941197 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.942602 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.942747 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.943274 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.946028 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.947575 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.948929 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.948967 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.948981 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.949008 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.949021 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:28Z","lastTransitionTime":"2026-01-30T00:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.949511 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.952539 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: W0130 00:11:28.953200 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ed60012_d4e8_45fd_b124_fe7d6ca49ca0.slice/crio-eba47102c7748ce3ffb181f4710f6ca16d50782debd82d314ec6bcbfe89c3349 WatchSource:0}: Error finding container eba47102c7748ce3ffb181f4710f6ca16d50782debd82d314ec6bcbfe89c3349: Status 404 returned error can't find the container with id eba47102c7748ce3ffb181f4710f6ca16d50782debd82d314ec6bcbfe89c3349 Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.955535 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bf4b7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-6tmbq_openshift-multus(2ed60012-d4e8-45fd-b124-fe7d6ca49ca0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:28 crc kubenswrapper[5103]: E0130 00:11:28.956769 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" podUID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.973356 5103 
status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.985192 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.986972 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.990676 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 30 00:11:28 crc kubenswrapper[5103]: I0130 00:11:28.992685 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 30 00:11:28 crc 
kubenswrapper[5103]: I0130 00:11:28.998245 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.014770 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.016362 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.017173 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.033974 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.034691 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.051529 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.051724 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.051852 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.051990 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.052143 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.062113 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.062492 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.064224 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.068822 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.070821 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.071064 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:11:30.071030004 +0000 UTC m=+79.942528056 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.072239 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.073682 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.074656 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.075763 5103 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.075892 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.094094 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.106319 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.136139 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.137912 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.141689 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.154897 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.154931 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.154944 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.154961 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.154976 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.155942 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.156878 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.158843 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.160079 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.160731 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.162695 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.164524 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.166970 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.168743 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.170549 5103 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.172172 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.176376 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.177781 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.179080 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.197558 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.198459 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.200519 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.215634 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.215828 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.253483 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"eba47102c7748ce3ffb181f4710f6ca16d50782debd82d314ec6bcbfe89c3349"} Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.256364 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bf4b7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
multus-additional-cni-plugins-6tmbq_openshift-multus(2ed60012-d4e8-45fd-b124-fe7d6ca49ca0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.257154 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.259727 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.259757 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.259768 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.259785 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.259798 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.260103 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"38221fc62e1b3d592b338664053e425c486a6c0fa3cf8ead449229dbfc4659da"} Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.261162 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" podUID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.263144 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:29 crc kubenswrapper[5103]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 30 00:11:29 crc kubenswrapper[5103]: apiVersion: v1 Jan 30 00:11:29 crc kubenswrapper[5103]: clusters: Jan 30 00:11:29 crc kubenswrapper[5103]: - cluster: Jan 30 00:11:29 crc kubenswrapper[5103]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 30 00:11:29 crc kubenswrapper[5103]: server: https://api-int.crc.testing:6443 Jan 30 00:11:29 crc kubenswrapper[5103]: name: default-cluster Jan 30 00:11:29 crc kubenswrapper[5103]: contexts: Jan 30 00:11:29 crc kubenswrapper[5103]: - context: Jan 30 00:11:29 crc kubenswrapper[5103]: cluster: default-cluster Jan 30 00:11:29 crc kubenswrapper[5103]: namespace: default Jan 30 00:11:29 crc kubenswrapper[5103]: user: default-auth Jan 30 00:11:29 crc kubenswrapper[5103]: name: default-context Jan 30 00:11:29 crc kubenswrapper[5103]: current-context: default-context Jan 30 00:11:29 crc kubenswrapper[5103]: kind: Config Jan 30 00:11:29 crc kubenswrapper[5103]: preferences: {} Jan 30 00:11:29 crc kubenswrapper[5103]: users: Jan 30 00:11:29 crc kubenswrapper[5103]: - name: default-auth Jan 30 00:11:29 crc kubenswrapper[5103]: user: Jan 30 00:11:29 crc kubenswrapper[5103]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:11:29 crc kubenswrapper[5103]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 30 00:11:29 crc kubenswrapper[5103]: EOF Jan 30 00:11:29 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2mbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-node-8lwjf_openshift-ovn-kubernetes(b3efa2c9-9a52-46ea-b9ad-f708dd386e79): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:29 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.264730 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.264832 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swfns" event={"ID":"a7dd7e02-4357-4643-8c23-2fb57ba70405","Type":"ContainerStarted","Data":"51fe3ecace5ef60f7821d3b34991b5c62f99813d3db14bf24fd27e56faf1a5e1"} Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.267201 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:29 crc kubenswrapper[5103]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 30 00:11:29 crc kubenswrapper[5103]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 30 00:11:29 crc kubenswrapper[5103]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t7t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-swfns_openshift-multus(a7dd7e02-4357-4643-8c23-2fb57ba70405): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:29 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.267701 5103 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"3c42330e1db35d226e4d0bab62f5575af608323ffc3993bc0551e0f8e21f70b3"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.268873 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.269440 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.269523 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-swfns" podUID="a7dd7e02-4357-4643-8c23-2fb57ba70405" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.271016 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtw8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-6g6hp_openshift-machine-config-operator(37f6985e-a0c9-43c8-a1bc-00f85204425f): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.272369 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:29 crc kubenswrapper[5103]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 30 00:11:29 crc kubenswrapper[5103]: while [ true ]; Jan 30 00:11:29 crc kubenswrapper[5103]: do Jan 30 00:11:29 crc kubenswrapper[5103]: for f in $(ls /tmp/serviceca); do Jan 30 00:11:29 crc kubenswrapper[5103]: echo $f Jan 30 00:11:29 crc kubenswrapper[5103]: ca_file_path="/tmp/serviceca/${f}" Jan 30 00:11:29 crc kubenswrapper[5103]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 30 00:11:29 crc kubenswrapper[5103]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 30 00:11:29 crc kubenswrapper[5103]: if [ -e "${reg_dir_path}" ]; then Jan 30 00:11:29 crc kubenswrapper[5103]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 30 00:11:29 crc kubenswrapper[5103]: else Jan 30 00:11:29 crc kubenswrapper[5103]: mkdir $reg_dir_path Jan 30 00:11:29 crc kubenswrapper[5103]: cp $ca_file_path $reg_dir_path/ca.crt Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: done Jan 30 00:11:29 crc kubenswrapper[5103]: for d in $(ls /etc/docker/certs.d); do Jan 30 00:11:29 crc kubenswrapper[5103]: echo $d Jan 30 00:11:29 crc kubenswrapper[5103]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 30 00:11:29 crc kubenswrapper[5103]: reg_conf_path="/tmp/serviceca/${dp}" Jan 30 00:11:29 crc kubenswrapper[5103]: if [ ! -e "${reg_conf_path}" ]; then Jan 30 00:11:29 crc kubenswrapper[5103]: rm -rf /etc/docker/certs.d/$d Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: done Jan 30 00:11:29 crc kubenswrapper[5103]: sleep 60 & wait ${!} Jan 30 00:11:29 crc kubenswrapper[5103]: done Jan 30 00:11:29 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sq678,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-226mj_openshift-image-registry(a0b75391-d8bb-4610-a69e-1f5c3a4e4eef): CreateContainerConfigError: services have not yet been read at 
least once, cannot construct envvars Jan 30 00:11:29 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.272855 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:29 crc kubenswrapper[5103]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 30 00:11:29 crc kubenswrapper[5103]: set -euo pipefail Jan 30 00:11:29 crc kubenswrapper[5103]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 30 00:11:29 crc kubenswrapper[5103]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 30 00:11:29 crc kubenswrapper[5103]: # As the secret mount is optional we must wait for the files to be present. Jan 30 00:11:29 crc kubenswrapper[5103]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 30 00:11:29 crc kubenswrapper[5103]: TS=$(date +%s) Jan 30 00:11:29 crc kubenswrapper[5103]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 30 00:11:29 crc kubenswrapper[5103]: HAS_LOGGED_INFO=0 Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: log_missing_certs(){ Jan 30 00:11:29 crc kubenswrapper[5103]: CUR_TS=$(date +%s) Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 30 00:11:29 crc kubenswrapper[5103]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 30 00:11:29 crc kubenswrapper[5103]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 30 00:11:29 crc kubenswrapper[5103]: HAS_LOGGED_INFO=1 Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: } Jan 30 00:11:29 crc kubenswrapper[5103]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Jan 30 00:11:29 crc kubenswrapper[5103]: log_missing_certs Jan 30 00:11:29 crc kubenswrapper[5103]: sleep 5 Jan 30 00:11:29 crc kubenswrapper[5103]: done Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 30 00:11:29 crc kubenswrapper[5103]: exec /usr/bin/kube-rbac-proxy \ Jan 30 00:11:29 crc kubenswrapper[5103]: --logtostderr \ Jan 30 00:11:29 crc kubenswrapper[5103]: --secure-listen-address=:9108 \ Jan 30 00:11:29 crc kubenswrapper[5103]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 30 00:11:29 crc kubenswrapper[5103]: --upstream=http://127.0.0.1:29108/ \ Jan 30 00:11:29 crc kubenswrapper[5103]: --tls-private-key-file=${TLS_PK} \ Jan 30 00:11:29 crc kubenswrapper[5103]: --tls-cert-file=${TLS_CERT} Jan 30 00:11:29 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prndc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-k7mv6_openshift-ovn-kubernetes(7d918c96-a16b-4836-ac5a-83c3388f5468): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:29 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.273985 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-226mj" podUID="a0b75391-d8bb-4610-a69e-1f5c3a4e4eef" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.274719 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtw8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-6g6hp_openshift-machine-config-operator(37f6985e-a0c9-43c8-a1bc-00f85204425f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.276238 5103 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 30 00:11:29 crc kubenswrapper[5103]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ -f "/env/_master" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: set -o allexport Jan 30 00:11:29 crc kubenswrapper[5103]: source "/env/_master" Jan 30 00:11:29 crc kubenswrapper[5103]: set +o allexport Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v4_join_subnet_opt= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v6_join_subnet_opt= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v4_transit_switch_subnet_opt= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v6_transit_switch_subnet_opt= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "" != "" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: dns_name_resolver_enabled_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:29 crc 
kubenswrapper[5103]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: # This is needed so that converting clusters from GA to TP Jan 30 00:11:29 crc kubenswrapper[5103]: # will rollout control plane pods as well Jan 30 00:11:29 crc kubenswrapper[5103]: network_segmentation_enabled_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: multi_network_enabled_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "true" == "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "true" == "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "true" != "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: multi_network_enabled_flag="--enable-multi-network" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: route_advertisements_enable_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: preconfigured_udn_addresses_enable_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: # Enable multi-network policy if configured (control-plane always full mode) Jan 30 00:11:29 crc kubenswrapper[5103]: multi_network_policy_enabled_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "false" == "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: # Enable admin network policy if configured (control-plane always full mode) Jan 30 00:11:29 crc kubenswrapper[5103]: admin_network_policy_enabled_flag= Jan 30 00:11:29 crc kubenswrapper[5103]: if [[ "true" == "true" ]]; then Jan 30 00:11:29 crc kubenswrapper[5103]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: if [ "shared" == "shared" ]; then Jan 30 00:11:29 crc kubenswrapper[5103]: gateway_mode_flags="--gateway-mode shared" Jan 30 00:11:29 crc kubenswrapper[5103]: elif [ "shared" == "local" ]; then Jan 30 00:11:29 crc kubenswrapper[5103]: gateway_mode_flags="--gateway-mode local" Jan 30 00:11:29 crc kubenswrapper[5103]: else Jan 30 00:11:29 crc kubenswrapper[5103]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 30 00:11:29 crc kubenswrapper[5103]: exit 1 Jan 30 00:11:29 crc kubenswrapper[5103]: fi Jan 30 00:11:29 crc kubenswrapper[5103]: Jan 30 00:11:29 crc kubenswrapper[5103]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 30 00:11:29 crc kubenswrapper[5103]: exec /usr/bin/ovnkube \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-interconnect \ Jan 30 00:11:29 crc kubenswrapper[5103]: --init-cluster-manager "${K8S_NODE}" \ Jan 30 00:11:29 crc kubenswrapper[5103]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 30 00:11:29 crc kubenswrapper[5103]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 30 00:11:29 crc kubenswrapper[5103]: --metrics-bind-address "127.0.0.1:29108" \ Jan 30 00:11:29 crc kubenswrapper[5103]: --metrics-enable-pprof \ Jan 30 00:11:29 crc kubenswrapper[5103]: --metrics-enable-config-duration \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${ovn_v4_join_subnet_opt} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${ovn_v6_join_subnet_opt} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${dns_name_resolver_enabled_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${persistent_ips_enabled_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${multi_network_enabled_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${network_segmentation_enabled_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${gateway_mode_flags} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${route_advertisements_enable_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${preconfigured_udn_addresses_enable_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-egress-ip=true \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-egress-firewall=true \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-egress-qos=true \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-egress-service=true \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-multicast \ Jan 30 00:11:29 crc kubenswrapper[5103]: --enable-multi-external-gateway=true \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${multi_network_policy_enabled_flag} \ Jan 30 00:11:29 crc kubenswrapper[5103]: ${admin_network_policy_enabled_flag} Jan 30 00:11:29 crc kubenswrapper[5103]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prndc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-k7mv6_openshift-ovn-kubernetes(7d918c96-a16b-4836-ac5a-83c3388f5468): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 30 00:11:29 crc kubenswrapper[5103]: > logger="UnhandledError" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.276473 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.277896 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.295311 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.336686 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.361563 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.361607 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.361617 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.361635 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.361647 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.374904 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\
\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T0
0:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.414647 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.457973 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.464005 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.464042 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.464077 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.464098 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.464113 5103 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.496170 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.538647 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.566118 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.566214 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.566233 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.566258 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.566277 5103 setters.go:618] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.579944 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.619520 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.656828 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.672015 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.672103 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.672122 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.672150 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.672174 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.680535 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.680773 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.681043 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:31.681005483 +0000 UTC m=+81.552503575 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.714943 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\
\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\
\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.736279 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.743447 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.743503 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.743523 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.743547 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.743563 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.758193 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.762380 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.762447 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.762461 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.762479 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.762493 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.778388 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.778500 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.784024 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.784196 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.784239 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.784316 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.784384 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.801408 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.805480 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.805519 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.805530 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.805547 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.805557 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.819695 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.820292 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.824637 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.824675 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.824738 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.824752 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.824764 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.839839 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.839991 5103 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.841586 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.841628 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.841640 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.841654 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.841664 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.854907 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.867673 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.867877 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.867905 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.868109 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.868166 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:29 crc kubenswrapper[5103]: E0130 00:11:29.868228 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.895652 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.935384 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.943925 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.943968 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.943983 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.944006 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.944023 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:29Z","lastTransitionTime":"2026-01-30T00:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:29 crc kubenswrapper[5103]: I0130 00:11:29.989571 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.015928 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.046944 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.046998 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.047017 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.047044 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.047110 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.062573 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.088898 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.089117 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:11:32.0890861 +0000 UTC m=+81.960584172 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.096862 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.138110 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.149245 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.149310 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.149333 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.149367 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.149388 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.174430 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.215942 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.251607 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.251653 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.251664 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.251681 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.251693 5103 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.256820 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.354517 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.355302 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.355352 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.355378 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.355397 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.457668 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.457715 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.457726 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.457744 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.457755 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.492932 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.493145 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.493220 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.493338 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.493377 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:34.493342505 +0000 UTC m=+84.364840597 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.493435 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:34.493412036 +0000 UTC m=+84.364910128 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.560496 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.560578 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.560594 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.560621 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.560635 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.594169 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.594240 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594412 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594433 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594445 5103 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594458 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594506 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594525 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:34.594506101 +0000 UTC m=+84.466004153 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594526 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.594573 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:34.594564362 +0000 UTC m=+84.466062424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.663780 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.663880 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.663908 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.663944 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.663964 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.766945 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.767012 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.767023 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.767068 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.767080 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.868205 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:30 crc kubenswrapper[5103]: E0130 00:11:30.868407 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.869226 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.869349 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.869387 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.869406 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.869421 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.880490 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.891779 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.906192 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.918331 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, 
cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.931271 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.943011 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.960788 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.971280 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.971353 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.971369 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.971413 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.971437 5103 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:30Z","lastTransitionTime":"2026-01-30T00:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.975562 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:30 crc kubenswrapper[5103]: I0130 00:11:30.989284 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.009644 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f9
7e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.023439 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.037525 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.052424 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.061719 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.069210 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.073327 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.073397 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.073407 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.073427 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.073439 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.078959 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.096202 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.106409 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.120727 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources
\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e
1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.175648 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.175696 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.175707 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.175725 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.175737 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.278379 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.279263 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.279524 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.279574 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.279603 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.382870 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.383265 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.383334 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.383361 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.383378 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.486304 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.486362 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.486377 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.486396 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.486410 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.588929 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.588980 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.588989 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.589003 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.589030 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.691649 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.691698 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.691707 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.691722 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.691734 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.706250 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:31 crc kubenswrapper[5103]: E0130 00:11:31.706347 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:31 crc kubenswrapper[5103]: E0130 00:11:31.706400 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:35.706386693 +0000 UTC m=+85.577884745 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.794318 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.794383 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.794402 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.794420 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.794432 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.867713 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.867713 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.867962 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:31 crc kubenswrapper[5103]: E0130 00:11:31.867966 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:31 crc kubenswrapper[5103]: E0130 00:11:31.868105 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:31 crc kubenswrapper[5103]: E0130 00:11:31.868172 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.874591 5103 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.896816 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.896871 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.896887 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.896911 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.896929 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.998813 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.999037 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.999074 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.999092 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:31 crc kubenswrapper[5103]: I0130 00:11:31.999107 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:31Z","lastTransitionTime":"2026-01-30T00:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.101261 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.101304 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.101316 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.101528 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.101541 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.109749 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:32 crc kubenswrapper[5103]: E0130 00:11:32.110124 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:11:36.110081023 +0000 UTC m=+85.981579115 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.204907 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.204968 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.204983 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.205005 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.205017 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.307862 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.307931 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.307950 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.307977 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.307997 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.410741 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.410816 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.410830 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.410879 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.410892 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.513335 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.513408 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.513424 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.513442 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.513454 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.615957 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.616009 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.616018 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.616035 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.616062 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.718578 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.718660 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.718673 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.718695 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.718728 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.820869 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.820913 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.820924 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.820942 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.820953 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.868402 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:32 crc kubenswrapper[5103]: E0130 00:11:32.868922 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.923022 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.923091 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.923101 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.923117 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:32 crc kubenswrapper[5103]: I0130 00:11:32.923128 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:32Z","lastTransitionTime":"2026-01-30T00:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.025939 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.026011 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.026026 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.026070 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.026087 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.129171 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.129246 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.129261 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.129285 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.129320 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.232531 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.232579 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.232589 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.232604 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.232615 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.335557 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.335630 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.335644 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.335674 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.335694 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.438735 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.438796 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.438811 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.438838 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.438850 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.542318 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.542387 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.542402 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.542427 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.542443 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.645424 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.645492 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.645508 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.645531 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.645545 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.747541 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.747593 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.747605 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.747623 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.747636 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.850058 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.850116 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.850127 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.850152 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.850165 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.871877 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.871928 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.871879 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:33 crc kubenswrapper[5103]: E0130 00:11:33.872111 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:33 crc kubenswrapper[5103]: E0130 00:11:33.872223 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:33 crc kubenswrapper[5103]: E0130 00:11:33.872352 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.953176 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.953267 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.953304 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.953410 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:33 crc kubenswrapper[5103]: I0130 00:11:33.953439 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:33Z","lastTransitionTime":"2026-01-30T00:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.055744 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.055812 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.055829 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.055894 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.055914 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.158496 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.158554 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.158566 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.158583 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.158595 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.218431 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.219534 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.219793 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.260664 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.260749 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.260775 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.260807 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.260832 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.364213 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.364527 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.364686 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.364886 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.365016 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.468327 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.468455 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.468488 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.468521 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.468540 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.542545 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.542650 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.542782 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.542870 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:42.542845704 +0000 UTC m=+92.414343796 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.543538 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.543597 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:42.543579702 +0000 UTC m=+92.415077784 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.570863 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.570928 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.570949 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.570975 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.570993 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.643934 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.644024 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.644235 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.644278 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.644296 5103 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.644389 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2026-01-30 00:11:42.644363699 +0000 UTC m=+92.515861781 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.644922 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.644990 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.645015 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.645209 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:42.645170869 +0000 UTC m=+92.516668951 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.674029 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.674116 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.674135 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.674161 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.674186 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.777209 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.777283 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.777302 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.777326 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.777461 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.867511 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:34 crc kubenswrapper[5103]: E0130 00:11:34.867762 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.880361 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.880435 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.880460 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.880492 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.880516 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.983509 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.983985 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.984239 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.984383 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:34 crc kubenswrapper[5103]: I0130 00:11:34.984554 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:34Z","lastTransitionTime":"2026-01-30T00:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.087705 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.088702 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.088887 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.089108 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.089246 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.192456 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.192529 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.192548 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.192577 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.192597 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.294691 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.294746 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.294762 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.294785 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.294803 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.397348 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.397391 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.397400 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.397419 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.397439 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.500221 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.500272 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.500281 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.500296 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.500307 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.602631 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.602692 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.602708 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.602728 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.602741 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.704826 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.704911 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.704954 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.704986 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.705009 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.759362 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:35 crc kubenswrapper[5103]: E0130 00:11:35.759577 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:35 crc kubenswrapper[5103]: E0130 00:11:35.759707 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:43.759679526 +0000 UTC m=+93.631177608 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.807948 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.808012 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.808024 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.808043 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.808076 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.868313 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.868388 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.868467 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:35 crc kubenswrapper[5103]: E0130 00:11:35.868590 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:35 crc kubenswrapper[5103]: E0130 00:11:35.868754 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:35 crc kubenswrapper[5103]: E0130 00:11:35.868939 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.911034 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.911157 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.911191 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.911225 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:35 crc kubenswrapper[5103]: I0130 00:11:35.911248 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:35Z","lastTransitionTime":"2026-01-30T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.013754 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.013852 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.013873 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.013903 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.013924 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.116949 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.117011 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.117028 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.117081 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.117125 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.165775 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:36 crc kubenswrapper[5103]: E0130 00:11:36.166142 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:11:44.166110043 +0000 UTC m=+94.037608135 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.219803 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.219891 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.219911 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.219936 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.219954 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.322211 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.322299 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.322327 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.322360 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.322383 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.425332 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.425401 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.425423 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.425447 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.425465 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.528715 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.528803 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.528828 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.528859 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.528878 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.631798 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.631846 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.631858 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.631878 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.631891 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.734789 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.734902 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.734922 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.734951 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.734970 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.837516 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.837574 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.837587 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.837617 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.837630 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.867848 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:36 crc kubenswrapper[5103]: E0130 00:11:36.868027 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.940040 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.940146 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.940165 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.940191 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:36 crc kubenswrapper[5103]: I0130 00:11:36.940212 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:36Z","lastTransitionTime":"2026-01-30T00:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.042926 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.043017 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.043105 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.043142 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.043165 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.091212 5103 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.146333 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.146413 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.146437 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.146470 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.146495 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.249211 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.249278 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.249295 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.249322 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.249341 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.383324 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.383407 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.383433 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.383486 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.383513 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.486458 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.486681 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.486717 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.486752 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.486776 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.591538 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.591622 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.591651 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.591685 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.591710 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.694178 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.694282 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.694316 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.694351 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.694378 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.797212 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.797280 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.797299 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.797325 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.797345 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.867482 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:37 crc kubenswrapper[5103]: E0130 00:11:37.867675 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.867697 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:37 crc kubenswrapper[5103]: E0130 00:11:37.867857 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.867905 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:37 crc kubenswrapper[5103]: E0130 00:11:37.868002 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.900101 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.900179 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.900201 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.900227 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:37 crc kubenswrapper[5103]: I0130 00:11:37.900248 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:37Z","lastTransitionTime":"2026-01-30T00:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.003651 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.003738 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.003758 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.003788 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.003809 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.106730 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.106803 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.106826 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.106857 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.106881 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.210485 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.210587 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.210615 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.210650 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.210674 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.313442 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.313508 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.313530 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.313555 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.313707 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.416238 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.416309 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.416334 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.416360 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.416381 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.518613 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.518683 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.518703 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.518729 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.518750 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.621560 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.621630 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.621649 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.621679 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.621700 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.724268 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.724328 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.724345 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.724370 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.724391 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.827538 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.827582 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.827596 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.827612 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.827623 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.867700 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:38 crc kubenswrapper[5103]: E0130 00:11:38.867913 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.930149 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.930218 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.930235 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.930264 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:38 crc kubenswrapper[5103]: I0130 00:11:38.930283 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:38Z","lastTransitionTime":"2026-01-30T00:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.033153 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.033247 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.033271 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.033306 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.033329 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.136495 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.136894 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.137025 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.137212 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.137339 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.240356 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.240430 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.240450 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.240478 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.240504 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.343021 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.343119 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.343139 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.343171 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.343195 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.445578 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.445649 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.445668 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.445693 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.445712 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.549300 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.549374 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.549395 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.549426 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.549444 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.652300 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.652403 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.652562 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.652642 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.652667 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.755161 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.755209 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.755220 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.755238 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.755249 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.857741 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.857824 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.857850 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.857882 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.857904 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.867563 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.867626 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.867564 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:39 crc kubenswrapper[5103]: E0130 00:11:39.867807 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:39 crc kubenswrapper[5103]: E0130 00:11:39.867975 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:39 crc kubenswrapper[5103]: E0130 00:11:39.868779 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.960264 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.960312 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.960321 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.960337 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:39 crc kubenswrapper[5103]: I0130 00:11:39.960347 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:39Z","lastTransitionTime":"2026-01-30T00:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.062637 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.062687 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.062700 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.062725 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.062737 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.158671 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.158718 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.158735 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.158755 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.158767 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.170477 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.173870 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.173918 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.173931 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.173950 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.173961 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.184456 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.188031 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.188093 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.188129 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.188145 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.188153 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.199096 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.202693 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.202764 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.202785 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.202813 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.202832 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.214068 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.218711 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.218745 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.218754 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.218767 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.218779 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.226661 5103 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1ea1cdda-a321-4572-bf74-7f3caace2231\\\",\\\"systemUUID\\\":\\\"b34fa1b9-01b6-49ac-be3d-2edda0be241f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.226781 5103 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.228018 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.228079 5103 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.228092 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.228109 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.228121 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.305795 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" event={"ID":"7d918c96-a16b-4836-ac5a-83c3388f5468","Type":"ContainerStarted","Data":"031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.308432 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.320133 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.329855 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.329914 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.329931 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.329949 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.329962 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.334120 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.344044 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.352793 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.369668 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f9
7e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.380346 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.390775 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.399660 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.406298 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.412436 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.420011 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.432536 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.432595 5103 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.432607 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.432623 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.432633 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.433171 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.439667 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\
\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.447961 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 
+0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.456278 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.465738 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.483027 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.493398 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, 
cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.503830 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.534820 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.534859 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.534869 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.534886 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.534897 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.638431 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.638934 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.638952 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.638978 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.638996 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.741761 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.741827 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.741847 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.741874 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.741894 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.844997 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.845094 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.845115 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.845142 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.845163 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.867790 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:40 crc kubenswrapper[5103]: E0130 00:11:40.868284 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.887749 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\
\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.901729 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.919714 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.932914 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.947684 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.947734 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.947745 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.947761 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.947774 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:40Z","lastTransitionTime":"2026-01-30T00:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.963696 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.983874 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:40 crc kubenswrapper[5103]: I0130 00:11:40.998039 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.011566 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.025535 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.039409 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.051023 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.051185 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.051198 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.051216 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.051230 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.051778 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.069619 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.079958 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\
\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.093468 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 
+0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.105341 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.117700 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.130367 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.140173 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, 
cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.153451 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.154076 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.154124 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.154136 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.154153 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.154165 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.257371 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.257425 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.257445 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.257468 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.257487 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.313117 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"ca66bc51f5182ad2848199e1ce4c53eace8150ce3903b340b402f6cc7f00ed42"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.314971 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" event={"ID":"7d918c96-a16b-4836-ac5a-83c3388f5468","Type":"ContainerStarted","Data":"6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.317162 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" exitCode=0 Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.317275 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.317330 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.317347 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.324459 5103 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.325399 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.333067 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced
0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.345483 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.358923 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.359147 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.359202 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.359213 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.359238 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.359253 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.372560 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.385884 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.397764 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.417794 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f9
7e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.435148 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.452599 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.461636 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.461687 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.461700 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.461721 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.461735 5103 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.473281 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.484934 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.493465 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.501878 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.527305 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629
230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.541687 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.562477 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.563522 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.563569 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.563583 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.563600 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.563612 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.572494 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.581351 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.666159 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.666217 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.666234 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.666257 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.666275 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.768576 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.768665 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.768694 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.768724 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.768750 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.867730 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.868360 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:41 crc kubenswrapper[5103]: E0130 00:11:41.868507 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:41 crc kubenswrapper[5103]: E0130 00:11:41.869698 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.869837 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:41 crc kubenswrapper[5103]: E0130 00:11:41.870020 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.873686 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.873743 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.873764 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.873791 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.873817 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.977238 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.977309 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.977327 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.977354 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:41 crc kubenswrapper[5103]: I0130 00:11:41.977373 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:41Z","lastTransitionTime":"2026-01-30T00:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.080244 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.080295 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.080313 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.080332 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.080345 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.183726 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.183802 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.183828 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.183861 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.183886 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.285808 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.285852 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.285864 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.285881 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.285893 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.323967 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"47d4649f628f9ff08c1eae857ce8b6a70f66ec474c9229aafcc4d26442b014bb"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.326971 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"f9d4456cff54a878b20f8da7f00f13f75d8988ff57db65c6a3b57af33f1e7207"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.329895 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.331558 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bs8rz" event={"ID":"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e","Type":"ContainerStarted","Data":"d163b6e7c84eb8b849bb1ee928e487432bb5324c921125ffa1574c6bae1285b3"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.387714 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.387766 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.387779 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.387802 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.387817 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.489841 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.489896 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.489913 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.489937 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.489955 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.555225 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.555345 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.555427 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.555484 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.555548 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:58.555521256 +0000 UTC m=+108.427019338 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.555580 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:58.555566437 +0000 UTC m=+108.427064519 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.580793 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.592255 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.592306 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.592320 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.592340 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.592355 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.594640 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"ho
stIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.608726 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.632254 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.645291 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.656940 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.657017 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657173 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657209 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657220 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657298 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 
podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:58.657270286 +0000 UTC m=+108.528768338 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657767 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657799 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657811 5103 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.657867 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:58.65785058 +0000 UTC m=+108.529348632 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.658189 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.668496 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced
0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.681025 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.693153 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.694028 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.694108 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.694123 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.694147 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.694161 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.704313 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.717333 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.728978 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.760119 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f9
7e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.773353 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.783085 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.792283 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.795925 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.795981 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.795999 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.796022 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.796038 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.800868 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://d163b6e7c84eb8b849bb1ee928e487432bb5324c921125ffa1574c6bae1285b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.808129 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.814712 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.822084 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://d163b6e7c84eb8b849bb1ee928e487432bb5324c921125ffa1574c6bae1285b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.829274 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.837094 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.850916 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.858883 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.867419 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:42 crc kubenswrapper[5103]: E0130 00:11:42.867779 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.875420 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.886521 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.898723 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.898790 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.898804 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.898826 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.898842 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:42Z","lastTransitionTime":"2026-01-30T00:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.911807 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.926892 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.934813 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d918c96-a16b-4836-ac5a-83c3388f5468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-prndc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k7mv6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.947722 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf4b7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6tmbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.956585 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"218f92a3-a814-4861-89a2-deac8c1df418\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bee67988b75695f996caff46a71fe3f9d052fc8e0512fe5f5deda903aad50a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4c6a59359c30e8bf14b02b119515a10aecde78e3fff52c7cb4511390b8791f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b617ba472b3001fb536148e5349ef2ac3c834c380f4b1a301378647a444cb29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91d941f6cff7152792d7bbf4322f503aa415ffad2b665a0278c74fa9a8cc0541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.967943 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9d4456cff54a878b20f8da7f00f13f75d8988ff57db65c6a3b57af33f1e7207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ca66bc51f5182ad2848199e1ce4c53eace8150ce3903b340b402f6cc7f00ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 
00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.978963 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-swfns" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7dd7e02-4357-4643-8c23-2fb57ba70405\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4t7t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swfns\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:42 crc kubenswrapper[5103]: I0130 00:11:42.987677 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.001638 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.001688 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.001700 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.001718 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.001730 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.004439 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.014663 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.024271 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.033431 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.104146 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.104193 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.104205 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.104223 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.104236 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.205946 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.205991 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.206004 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.206020 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.206030 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.308243 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.308294 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.308306 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.308324 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.308337 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.338891 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"5bc4f366d49d119d07ce33722a1d708340e6808bb491c23b9a7fa21fa8df1420"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.343187 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.343227 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.346260 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swfns" event={"ID":"a7dd7e02-4357-4643-8c23-2fb57ba70405","Type":"ContainerStarted","Data":"1924d7799e7a22d8b03bdfa9e3bf703744981a899ee974cc86920ae8c5fcbbcb"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.347694 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"c00427c4884245a18d4fdb095bd973b778a49a0f7191904be6dec15bdd672466"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.349943 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37f6985e-a0c9-43c8-a1bc-00f85204425f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jtw8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-6g6hp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.350085 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"8380f1a09b9ebf3cdb88be129a121cf08d82551f2019e61e3b89fbec5c6f12b3"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.367459 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c078d17-084e-4a93-bf5d-5307565df3fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://631387ca5c470d6eae47c1f7e98c426da26504decf7bd07cff0834665a05f383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://adc8bc92a51a3da06e89b6c6737a0f67ca5ec49feefabbad834d1fb175f10301\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5206eeb10f9f29f439553967c8635dd99623f501588a6d3f9747d7cb11eb5226\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ece460a374d5f87d95cbf73ee1cd725cdd921a2d2ffb32418d899b8f6be7a47c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c49dc9284b665c94db8c5545a3512cb7a7e093905a9eaca591635a87b0ef020\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f9
7e905870ba0779601d072183d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55e5fb3441c3a4f35f63a3ac543de4dc7db8f97e905870ba0779601d072183d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7f3ea9ccf24fb312a937285233184c92febb994b00d178932a12eaea2a416b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://599f4a2247c5cea2af6483e5ca9806cb3301aa3b4cb57e3b98bf4a423c5f4df4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.380128 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a919e3c-ce3d-4536-94c5-853055b5f031\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9df6fced0c2fa45aff20339a8f79ef39d8c347c90a3d1affbb2b3de2e7247d65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f2a32e4622af343e97ac94d0afec18e3018499e555117e8b302f2544ecb76514\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6ef6a39297a8dd92961a5740c735c00279aa8cc79d82f711420078847ad47406\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.389898 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.400764 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.406991 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bs8rz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef3f9074-af3f-43f4-ad74-efe1ba4abc8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://d163b6e7c84eb8b849bb1ee928e487432bb5324c921125ffa1574c6bae1285b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:11:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89lmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bs8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.410744 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.410860 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.410918 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.410991 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.411076 5103 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.415140 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-226mj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq678\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-226mj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.424028 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"566ee5b2-938f-41f6-8625-e8a987181d60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vsrcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.442688 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629
230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j2mbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:11:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8lwjf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.454244 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5493b8f7-69d6-4cdf-a7ce-7240e20b3687\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99332c8e9f660ebdb94ff63073a2ee3e6e81c3f561fb066b48b57dec59eafc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://105ae586d920251f83e92d708b3b31a0ac2e20dff56c8f2fca7706a6984f145e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.466195 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dac48a-8ec1-4c4a-a2d9-c3a1567645a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T00:10:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T00:11:24Z\\\",\\\"message\\\":\\\"ube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373654 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 00:11:24.373896 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\"\\\\nI0130 00:11:24.373948 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-143293288/tls.crt::/tmp/serving-cert-143293288/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769731883\\\\\\\\\\\\\\\" (2026-01-30 00:11:22 +0000 UTC to 2026-01-30 00:11:23 +0000 UTC (now=2026-01-30 00:11:24.373876741 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374202 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769731884\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769731884\\\\\\\\\\\\\\\" (2026-01-29 23:11:23 +0000 UTC to 2029-01-29 23:11:23 +0000 UTC (now=2026-01-30 00:11:24.374177978 +0000 UTC))\\\\\\\"\\\\nI0130 00:11:24.374228 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI0130 00:11:24.374252 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 00:11:24.374266 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0130 00:11:24.375007 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375737 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nI0130 00:11:24.375876 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" type=\\\\\\\"*v1.ConfigMap\\\\\\\" reflector=\\\\\\\"k8s.io/client-go@v0.33.2/tools/cache/reflector.go:285\\\\\\\"\\\\nF0130 00:11:24.423188 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T00:11:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T00:10:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T00:10:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T00:10:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T00:10:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.478989 5103 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T00:11:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.512710 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.512781 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.512793 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.512814 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.512842 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.547190 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" podStartSLOduration=70.54717141 podStartE2EDuration="1m10.54717141s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.546663698 +0000 UTC m=+93.418161760" watchObservedRunningTime="2026-01-30 00:11:43.54717141 +0000 UTC m=+93.418669462" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.606883 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=17.606847779 podStartE2EDuration="17.606847779s" podCreationTimestamp="2026-01-30 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.593325931 +0000 UTC m=+93.464824003" watchObservedRunningTime="2026-01-30 00:11:43.606847779 +0000 UTC m=+93.478345831" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.614656 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.614705 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.614721 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.614739 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.614750 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.636994 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.636977000999998 podStartE2EDuration="17.636977001s" podCreationTimestamp="2026-01-30 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.635758481 +0000 UTC m=+93.507256553" watchObservedRunningTime="2026-01-30 00:11:43.636977001 +0000 UTC m=+93.508475053" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.702903 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-swfns" podStartSLOduration=70.702886341 podStartE2EDuration="1m10.702886341s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.693142954 +0000 UTC m=+93.564641016" watchObservedRunningTime="2026-01-30 00:11:43.702886341 +0000 UTC m=+93.574384393" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.703107 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podStartSLOduration=70.703103626 podStartE2EDuration="1m10.703103626s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.70287015 +0000 UTC m=+93.574368212" watchObservedRunningTime="2026-01-30 00:11:43.703103626 +0000 UTC m=+93.574601678" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.717207 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.717246 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.717256 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.717271 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.717282 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.731813 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=17.731796053 podStartE2EDuration="17.731796053s" podCreationTimestamp="2026-01-30 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.731571417 +0000 UTC m=+93.603069469" watchObservedRunningTime="2026-01-30 00:11:43.731796053 +0000 UTC m=+93.603294105" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.750814 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=17.750800384 podStartE2EDuration="17.750800384s" podCreationTimestamp="2026-01-30 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.749580544 +0000 UTC m=+93.621078596" watchObservedRunningTime="2026-01-30 00:11:43.750800384 +0000 UTC m=+93.622298436" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.769447 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:43 crc kubenswrapper[5103]: E0130 00:11:43.769648 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:43 crc kubenswrapper[5103]: E0130 00:11:43.769745 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:11:59.769721883 +0000 UTC m=+109.641219995 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.817026 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bs8rz" podStartSLOduration=71.817008301 podStartE2EDuration="1m11.817008301s" podCreationTimestamp="2026-01-30 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:43.799606489 +0000 UTC m=+93.671104551" watchObservedRunningTime="2026-01-30 00:11:43.817008301 +0000 UTC m=+93.688506353" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.819214 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.819262 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.819276 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.819297 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.819312 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.868233 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.868299 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.868260 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:43 crc kubenswrapper[5103]: E0130 00:11:43.868389 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:43 crc kubenswrapper[5103]: E0130 00:11:43.868476 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:43 crc kubenswrapper[5103]: E0130 00:11:43.868718 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.921320 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.921371 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.921386 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.921404 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:43 crc kubenswrapper[5103]: I0130 00:11:43.921415 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:43Z","lastTransitionTime":"2026-01-30T00:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.024110 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.024227 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.024252 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.024284 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.024312 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.127043 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.127137 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.127155 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.127184 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.127202 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.173542 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:11:44 crc kubenswrapper[5103]: E0130 00:11:44.173685 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:00.17366524 +0000 UTC m=+110.045163292 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.229818 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.229904 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.229928 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.229962 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.229986 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.332521 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.332622 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.332664 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.332699 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.332727 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.355020 5103 generic.go:358] "Generic (PLEG): container finished" podID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" containerID="5bc4f366d49d119d07ce33722a1d708340e6808bb491c23b9a7fa21fa8df1420" exitCode=0 Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.355123 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerDied","Data":"5bc4f366d49d119d07ce33722a1d708340e6808bb491c23b9a7fa21fa8df1420"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.362094 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.434497 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.434548 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.434561 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.434580 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.434592 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.537493 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.537560 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.537572 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.537591 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.537603 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.639953 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.640025 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.640097 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.640128 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.640148 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.743368 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.743440 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.743453 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.743472 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.743483 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.846418 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.846458 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.846466 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.846481 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.846493 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.868460 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:44 crc kubenswrapper[5103]: E0130 00:11:44.868761 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.949539 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.949602 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.949617 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.949639 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:44 crc kubenswrapper[5103]: I0130 00:11:44.949657 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:44Z","lastTransitionTime":"2026-01-30T00:11:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.051839 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.051885 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.051894 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.051907 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.051916 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.154444 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.154582 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.154590 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.154604 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.154613 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.257988 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.258028 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.258036 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.258063 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.258073 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.360846 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.360889 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.360898 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.360913 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.360925 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.463370 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.463423 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.463436 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.463455 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.463467 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.566175 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.566233 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.566245 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.566268 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.566284 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.668236 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.668293 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.668305 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.668324 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.668337 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.771787 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.771830 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.771844 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.771864 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.771879 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.867809 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.867811 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:45 crc kubenswrapper[5103]: E0130 00:11:45.868244 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.868297 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:45 crc kubenswrapper[5103]: E0130 00:11:45.868430 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:45 crc kubenswrapper[5103]: E0130 00:11:45.867980 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.874410 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.874451 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.874462 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.874479 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.874490 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.976901 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.976976 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.976995 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.977026 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:45 crc kubenswrapper[5103]: I0130 00:11:45.977089 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:45Z","lastTransitionTime":"2026-01-30T00:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.080287 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.080330 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.080341 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.080361 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.080374 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.182585 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.183264 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.183286 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.183312 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.183331 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.288525 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.288577 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.288589 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.288609 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.288623 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.370242 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-226mj" event={"ID":"a0b75391-d8bb-4610-a69e-1f5c3a4e4eef","Type":"ContainerStarted","Data":"a0930183f1f4292ce8a16800710c911eabe584c23ad6a0c11628c72ca3f07140"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.375129 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"1f2cf4a25105d9ac9f14e0cee69917668cb0c2b30471ac5ca7cb5fb06f4fa4e0"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.379229 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.386802 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-226mj" podStartSLOduration=73.386788938 podStartE2EDuration="1m13.386788938s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:46.386518002 +0000 UTC m=+96.258016064" watchObservedRunningTime="2026-01-30 00:11:46.386788938 +0000 UTC m=+96.258286990" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.391728 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.391855 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.391955 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.392031 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.392129 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.494981 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.495032 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.495061 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.495082 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.495097 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.599069 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.599135 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.599158 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.599185 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.599198 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.701438 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.701481 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.701490 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.701505 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.701515 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.803511 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.803594 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.803611 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.803633 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.803651 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.867674 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:46 crc kubenswrapper[5103]: E0130 00:11:46.868122 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.868465 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:11:46 crc kubenswrapper[5103]: E0130 00:11:46.868866 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.905189 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.905275 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.905324 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.905348 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:46 crc kubenswrapper[5103]: I0130 00:11:46.905364 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:46Z","lastTransitionTime":"2026-01-30T00:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.007823 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.007895 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.007912 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.007943 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.007961 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.110759 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.110823 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.110840 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.110865 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.110885 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.212990 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.213082 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.213095 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.213118 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.213129 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.315303 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.315366 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.315379 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.315401 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.315413 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.385209 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"d70a15da4267dab2faca43e16238c605ba7c8b5aba4f4f76d7eb2342b799a2e0"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.417801 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.417856 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.417868 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.417888 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.417918 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.519764 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.519811 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.519827 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.519849 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.519867 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.621726 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.621802 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.621829 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.621862 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.621885 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.724770 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.724822 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.724835 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.724853 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.724866 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.826954 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.827043 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.827112 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.827143 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.827164 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.867584 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.867629 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.867761 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:47 crc kubenswrapper[5103]: E0130 00:11:47.867776 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:47 crc kubenswrapper[5103]: E0130 00:11:47.867996 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:47 crc kubenswrapper[5103]: E0130 00:11:47.868201 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.930365 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.930430 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.930449 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.930477 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:47 crc kubenswrapper[5103]: I0130 00:11:47.930570 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:47Z","lastTransitionTime":"2026-01-30T00:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.033236 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.033304 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.033325 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.033352 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.033371 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.136315 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.136401 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.136429 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.136462 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.136485 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.238773 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.238821 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.238831 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.238847 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.238859 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.341329 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.341847 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.341863 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.341882 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.341895 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.391202 5103 generic.go:358] "Generic (PLEG): container finished" podID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" containerID="1f2cf4a25105d9ac9f14e0cee69917668cb0c2b30471ac5ca7cb5fb06f4fa4e0" exitCode=0 Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.391256 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerDied","Data":"1f2cf4a25105d9ac9f14e0cee69917668cb0c2b30471ac5ca7cb5fb06f4fa4e0"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.443463 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.443521 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.443532 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.443548 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.443557 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.546278 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.546348 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.546367 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.546395 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.546417 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.649100 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.649185 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.649210 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.649241 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.649260 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.751511 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.751584 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.751603 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.751632 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.751653 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.854625 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.854757 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.854776 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.854807 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.854826 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.873706 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:48 crc kubenswrapper[5103]: E0130 00:11:48.873847 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.957449 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.957524 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.957544 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.957571 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:48 crc kubenswrapper[5103]: I0130 00:11:48.957589 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:48Z","lastTransitionTime":"2026-01-30T00:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.059889 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.059975 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.059995 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.060023 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.060043 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.162915 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.162974 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.162991 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.163017 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.163076 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.265742 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.265817 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.265836 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.265862 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.265881 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.368812 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.368905 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.368932 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.368969 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.368994 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.471696 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.471767 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.471785 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.471810 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.471831 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.575125 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.575185 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.575197 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.575217 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.575231 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.678715 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.678799 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.678826 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.678863 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.678889 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.781955 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.782022 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.782075 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.782099 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.782117 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.868064 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:49 crc kubenswrapper[5103]: E0130 00:11:49.868261 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.868423 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:49 crc kubenswrapper[5103]: E0130 00:11:49.868651 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.868700 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:49 crc kubenswrapper[5103]: E0130 00:11:49.868773 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.884683 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.884789 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.884807 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.884832 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.884849 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.987388 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.987460 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.987473 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.987501 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:49 crc kubenswrapper[5103]: I0130 00:11:49.987516 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:49Z","lastTransitionTime":"2026-01-30T00:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.090417 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.090478 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.090488 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.090511 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.090521 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.193203 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.193269 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.193282 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.193301 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.193317 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.295594 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.295651 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.295667 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.295692 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.295708 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.400797 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.400863 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.400873 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.400907 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.400919 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.503732 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.504240 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.504250 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.504266 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.504277 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.588816 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.588850 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.588862 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.588880 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.588894 5103 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T00:11:50Z","lastTransitionTime":"2026-01-30T00:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.631918 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h"] Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.640846 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.644062 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.644110 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.644087 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.645315 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.770315 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f5044c8-5ef7-4573-b468-23f35b0a9776-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.770468 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f5044c8-5ef7-4573-b468-23f35b0a9776-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.770629 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/5f5044c8-5ef7-4573-b468-23f35b0a9776-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.770775 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5f5044c8-5ef7-4573-b468-23f35b0a9776-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.770918 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5f5044c8-5ef7-4573-b468-23f35b0a9776-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.853507 5103 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.862741 5103 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.868044 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:50 crc kubenswrapper[5103]: E0130 00:11:50.868213 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872390 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f5044c8-5ef7-4573-b468-23f35b0a9776-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872472 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5f5044c8-5ef7-4573-b468-23f35b0a9776-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872498 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5f5044c8-5ef7-4573-b468-23f35b0a9776-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872569 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5f5044c8-5ef7-4573-b468-23f35b0a9776-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872620 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f5044c8-5ef7-4573-b468-23f35b0a9776-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872637 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f5044c8-5ef7-4573-b468-23f35b0a9776-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.872697 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5f5044c8-5ef7-4573-b468-23f35b0a9776-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.874135 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f5044c8-5ef7-4573-b468-23f35b0a9776-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 
crc kubenswrapper[5103]: I0130 00:11:50.881917 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f5044c8-5ef7-4573-b468-23f35b0a9776-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.888482 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f5044c8-5ef7-4573-b468-23f35b0a9776-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-td66h\" (UID: \"5f5044c8-5ef7-4573-b468-23f35b0a9776\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: I0130 00:11:50.965119 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" Jan 30 00:11:50 crc kubenswrapper[5103]: W0130 00:11:50.976931 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f5044c8_5ef7_4573_b468_23f35b0a9776.slice/crio-5b957ef0732a85b3e4b0ce8b24a29a2dc0cbeed0a7035bc2a656228784c04a8a WatchSource:0}: Error finding container 5b957ef0732a85b3e4b0ce8b24a29a2dc0cbeed0a7035bc2a656228784c04a8a: Status 404 returned error can't find the container with id 5b957ef0732a85b3e4b0ce8b24a29a2dc0cbeed0a7035bc2a656228784c04a8a Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.413107 5103 generic.go:358] "Generic (PLEG): container finished" podID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" containerID="c3c72b6d4a189f1a50f6897c8bae426da23207df7906db7f2c038123cb36e44d" exitCode=0 Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.413236 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerDied","Data":"c3c72b6d4a189f1a50f6897c8bae426da23207df7906db7f2c038123cb36e44d"} Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.424246 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerStarted","Data":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.425968 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" event={"ID":"5f5044c8-5ef7-4573-b468-23f35b0a9776","Type":"ContainerStarted","Data":"5b957ef0732a85b3e4b0ce8b24a29a2dc0cbeed0a7035bc2a656228784c04a8a"} Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.660365 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.660427 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.660447 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.708410 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podStartSLOduration=78.70838096 podStartE2EDuration="1m18.70838096s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:51.707784506 +0000 UTC m=+101.579282588" watchObservedRunningTime="2026-01-30 00:11:51.70838096 +0000 UTC m=+101.579879032" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.749033 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.749711 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.867855 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:51 crc kubenswrapper[5103]: E0130 00:11:51.868175 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.868035 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:51 crc kubenswrapper[5103]: E0130 00:11:51.868391 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:51 crc kubenswrapper[5103]: I0130 00:11:51.868221 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:51 crc kubenswrapper[5103]: E0130 00:11:51.868582 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:52 crc kubenswrapper[5103]: I0130 00:11:52.432722 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"5364d6229c6502e47e15e4bde438c16569a5aa76c2b433051ad6651c6f257d58"} Jan 30 00:11:52 crc kubenswrapper[5103]: I0130 00:11:52.434592 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" event={"ID":"5f5044c8-5ef7-4573-b468-23f35b0a9776","Type":"ContainerStarted","Data":"69589db04036bfeab22e07f3489d4c166326d42aa7c5c626379206f4bba0b2ea"} Jan 30 00:11:52 crc kubenswrapper[5103]: I0130 00:11:52.867972 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:52 crc kubenswrapper[5103]: E0130 00:11:52.868221 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:53 crc kubenswrapper[5103]: I0130 00:11:53.442299 5103 generic.go:358] "Generic (PLEG): container finished" podID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" containerID="5364d6229c6502e47e15e4bde438c16569a5aa76c2b433051ad6651c6f257d58" exitCode=0 Jan 30 00:11:53 crc kubenswrapper[5103]: I0130 00:11:53.442371 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerDied","Data":"5364d6229c6502e47e15e4bde438c16569a5aa76c2b433051ad6651c6f257d58"} Jan 30 00:11:53 crc kubenswrapper[5103]: I0130 00:11:53.466688 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-td66h" podStartSLOduration=81.466668636 podStartE2EDuration="1m21.466668636s" podCreationTimestamp="2026-01-30 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:11:52.482256237 +0000 UTC m=+102.353754309" watchObservedRunningTime="2026-01-30 00:11:53.466668636 +0000 UTC m=+103.338166698" Jan 30 00:11:53 crc kubenswrapper[5103]: I0130 00:11:53.867278 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:53 crc kubenswrapper[5103]: I0130 00:11:53.867335 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:53 crc kubenswrapper[5103]: I0130 00:11:53.867283 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:53 crc kubenswrapper[5103]: E0130 00:11:53.867439 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:53 crc kubenswrapper[5103]: E0130 00:11:53.867576 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:53 crc kubenswrapper[5103]: E0130 00:11:53.867648 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:54 crc kubenswrapper[5103]: I0130 00:11:54.451744 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"3130c46399919346bb8566f5e47a88474c211d6208f3a2f8731a9ac4957000e6"} Jan 30 00:11:54 crc kubenswrapper[5103]: I0130 00:11:54.867732 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:54 crc kubenswrapper[5103]: E0130 00:11:54.867872 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:55 crc kubenswrapper[5103]: I0130 00:11:55.463440 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vsrcq"] Jan 30 00:11:55 crc kubenswrapper[5103]: I0130 00:11:55.463659 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:55 crc kubenswrapper[5103]: E0130 00:11:55.463819 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:55 crc kubenswrapper[5103]: I0130 00:11:55.867886 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:55 crc kubenswrapper[5103]: E0130 00:11:55.867990 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:55 crc kubenswrapper[5103]: I0130 00:11:55.868440 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:55 crc kubenswrapper[5103]: E0130 00:11:55.868497 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:56 crc kubenswrapper[5103]: I0130 00:11:56.868314 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:56 crc kubenswrapper[5103]: I0130 00:11:56.868379 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:56 crc kubenswrapper[5103]: E0130 00:11:56.868611 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:56 crc kubenswrapper[5103]: E0130 00:11:56.868989 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:57 crc kubenswrapper[5103]: I0130 00:11:57.867557 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:57 crc kubenswrapper[5103]: E0130 00:11:57.867765 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:57 crc kubenswrapper[5103]: I0130 00:11:57.867801 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:57 crc kubenswrapper[5103]: E0130 00:11:57.867918 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.576990 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.577147 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.577200 5103 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.577295 5103 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.577318 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.577287547 +0000 UTC m=+140.448785629 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.577394 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.577365939 +0000 UTC m=+140.448864031 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.679031 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.679153 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679381 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679408 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679429 5103 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679510 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.679486808 +0000 UTC m=+140.550984890 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679509 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679592 5103 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679622 5103 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.679756 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.679717644 +0000 UTC m=+140.551215736 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.868125 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.868282 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.869532 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:58 crc kubenswrapper[5103]: I0130 00:11:58.869733 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.869830 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:11:58 crc kubenswrapper[5103]: E0130 00:11:58.870541 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 30 00:11:59 crc kubenswrapper[5103]: I0130 00:11:59.473590 5103 generic.go:358] "Generic (PLEG): container finished" podID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" containerID="3130c46399919346bb8566f5e47a88474c211d6208f3a2f8731a9ac4957000e6" exitCode=0 Jan 30 00:11:59 crc kubenswrapper[5103]: I0130 00:11:59.473671 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerDied","Data":"3130c46399919346bb8566f5e47a88474c211d6208f3a2f8731a9ac4957000e6"} Jan 30 00:11:59 crc kubenswrapper[5103]: I0130 00:11:59.794955 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:11:59 crc kubenswrapper[5103]: E0130 00:11:59.795153 5103 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:59 crc kubenswrapper[5103]: E0130 00:11:59.795226 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs podName:566ee5b2-938f-41f6-8625-e8a987181d60 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.795210604 +0000 UTC m=+141.666708656 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs") pod "network-metrics-daemon-vsrcq" (UID: "566ee5b2-938f-41f6-8625-e8a987181d60") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 00:11:59 crc kubenswrapper[5103]: I0130 00:11:59.868228 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:11:59 crc kubenswrapper[5103]: E0130 00:11:59.868365 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 30 00:11:59 crc kubenswrapper[5103]: I0130 00:11:59.868472 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:11:59 crc kubenswrapper[5103]: E0130 00:11:59.868677 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.198675 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:00 crc kubenswrapper[5103]: E0130 00:12:00.198993 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.198951276 +0000 UTC m=+142.070449378 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.480494 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"8cfe21d64c3f3bd883b20a431e0d46df831383bde83f5c38c04c36b9c506f63b"} Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.867569 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.869856 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:12:00 crc kubenswrapper[5103]: E0130 00:12:00.870028 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 30 00:12:00 crc kubenswrapper[5103]: E0130 00:12:00.870221 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vsrcq" podUID="566ee5b2-938f-41f6-8625-e8a987181d60" Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.907590 5103 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.907876 5103 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Jan 30 00:12:00 crc kubenswrapper[5103]: I0130 00:12:00.956875 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-clmhf"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.006107 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-spmxr"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.006653 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.016253 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-6z46s"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.027718 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.028638 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.028929 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.029220 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.029342 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.029440 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.029706 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.030149 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.030428 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.031594 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.039167 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.109613 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-config\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.109885 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-audit\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.109965 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-client-ca\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110067 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxzw\" (UniqueName: \"kubernetes.io/projected/d3abf3af-b96a-44fa-bd40-1c92bab19b92-kube-api-access-4fxzw\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110161 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e9100695-b78d-4b2f-9cea-9d022064c792-node-pullsecrets\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110238 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110317 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-encryption-config\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110394 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-serving-cert\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110497 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3abf3af-b96a-44fa-bd40-1c92bab19b92-tmp\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110575 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9100695-b78d-4b2f-9cea-9d022064c792-audit-dir\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110650 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-etcd-client\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110721 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-config\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110793 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-image-import-ca\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110866 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.110946 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnphx\" (UniqueName: \"kubernetes.io/projected/e9100695-b78d-4b2f-9cea-9d022064c792-kube-api-access-jnphx\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.111026 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.111124 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3abf3af-b96a-44fa-bd40-1c92bab19b92-serving-cert\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.155022 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-qsf67"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.155172 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.155679 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.161809 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.162089 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.162230 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.162467 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.162612 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.162784 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.162938 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.163072 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.165281 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.165486 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.165786 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.166102 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.166284 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.166460 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.166704 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.174948 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.179147 5103 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-2xrjj"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.192571 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.192783 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.192805 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.196222 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.196575 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.196591 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.196661 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.200345 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.200513 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.200638 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.200761 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.201004 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.201367 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.213107 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-config\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.214288 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-config\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.214985 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-audit\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215094 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-client-ca\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215132 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-etcd-serving-ca\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215167 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-serving-cert\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215191 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-encryption-config\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215337 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4fxzw\" (UniqueName: \"kubernetes.io/projected/d3abf3af-b96a-44fa-bd40-1c92bab19b92-kube-api-access-4fxzw\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215383 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e9100695-b78d-4b2f-9cea-9d022064c792-node-pullsecrets\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215409 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215540 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-encryption-config\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215554 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-audit\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217011 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-serving-cert\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.216344 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e9100695-b78d-4b2f-9cea-9d022064c792-node-pullsecrets\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.216504 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-client-ca\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.215969 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217330 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3abf3af-b96a-44fa-bd40-1c92bab19b92-tmp\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217376 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9100695-b78d-4b2f-9cea-9d022064c792-audit-dir\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217410 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-trusted-ca-bundle\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217436 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-audit-policies\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217456 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vcrb\" (UniqueName: \"kubernetes.io/projected/a0ff7eb1-7b00-4318-936e-30862acd97e5-kube-api-access-6vcrb\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217490 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-etcd-client\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217860 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9100695-b78d-4b2f-9cea-9d022064c792-audit-dir\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217884 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3abf3af-b96a-44fa-bd40-1c92bab19b92-tmp\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217896 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-config\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217956 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-image-import-ca\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.217996 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.218019 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-etcd-client\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.218061 5103 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jnphx\" (UniqueName: \"kubernetes.io/projected/e9100695-b78d-4b2f-9cea-9d022064c792-kube-api-access-jnphx\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.218079 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0ff7eb1-7b00-4318-936e-30862acd97e5-audit-dir\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.218117 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.218134 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3abf3af-b96a-44fa-bd40-1c92bab19b92-serving-cert\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.219617 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-config\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.220155 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.220492 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.220759 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e9100695-b78d-4b2f-9cea-9d022064c792-image-import-ca\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.224864 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-encryption-config\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc 
kubenswrapper[5103]: I0130 00:12:01.228473 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-serving-cert\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.228646 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3abf3af-b96a-44fa-bd40-1c92bab19b92-serving-cert\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.229687 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9100695-b78d-4b2f-9cea-9d022064c792-etcd-client\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.235238 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-4rfkh"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.235454 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.236005 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fxzw\" (UniqueName: \"kubernetes.io/projected/d3abf3af-b96a-44fa-bd40-1c92bab19b92-kube-api-access-4fxzw\") pod \"controller-manager-65b6cccf98-spmxr\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.237287 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnphx\" (UniqueName: \"kubernetes.io/projected/e9100695-b78d-4b2f-9cea-9d022064c792-kube-api-access-jnphx\") pod \"apiserver-9ddfb9f55-clmhf\" (UID: \"e9100695-b78d-4b2f-9cea-9d022064c792\") " pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.237523 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.238077 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.238273 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.238909 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.238916 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.239220 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.319759 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-etcd-serving-ca\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.320369 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-serving-cert\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.320516 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndb68\" (UniqueName: \"kubernetes.io/projected/f80439cc-c38d-4210-a203-f478704d9dcd-kube-api-access-ndb68\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.320578 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-serving-cert\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.320608 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-encryption-config\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321416 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f80439cc-c38d-4210-a203-f478704d9dcd-machine-approver-tls\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321458 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-tmp\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321482 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgvv5\" (UniqueName: \"kubernetes.io/projected/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-kube-api-access-qgvv5\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 
00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321536 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-etcd-serving-ca\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321916 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f80439cc-c38d-4210-a203-f478704d9dcd-config\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321967 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-trusted-ca-bundle\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.321995 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-audit-policies\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322017 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6vcrb\" (UniqueName: \"kubernetes.io/projected/a0ff7eb1-7b00-4318-936e-30862acd97e5-kube-api-access-6vcrb\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322084 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kdfs\" (UniqueName: \"kubernetes.io/projected/91703ab7-2f05-4831-8200-85210adf830b-kube-api-access-7kdfs\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322109 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-config\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322179 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-etcd-client\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322207 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/a0ff7eb1-7b00-4318-936e-30862acd97e5-audit-dir\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322245 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-client-ca\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322303 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91703ab7-2f05-4831-8200-85210adf830b-serving-cert\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322329 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91703ab7-2f05-4831-8200-85210adf830b-available-featuregates\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322366 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f80439cc-c38d-4210-a203-f478704d9dcd-auth-proxy-config\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322403 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-trusted-ca-bundle\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.322786 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0ff7eb1-7b00-4318-936e-30862acd97e5-audit-dir\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.323254 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a0ff7eb1-7b00-4318-936e-30862acd97e5-audit-policies\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.324460 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-serving-cert\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.326914 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-etcd-client\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.327193 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.329006 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0ff7eb1-7b00-4318-936e-30862acd97e5-encryption-config\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.346195 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vcrb\" (UniqueName: \"kubernetes.io/projected/a0ff7eb1-7b00-4318-936e-30862acd97e5-kube-api-access-6vcrb\") pod \"apiserver-8596bd845d-6z46s\" (UID: \"a0ff7eb1-7b00-4318-936e-30862acd97e5\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.348063 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.348245 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.356075 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.356209 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.356887 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.357261 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.357608 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.372341 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424089 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-serving-cert\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424157 5103 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ndb68\" (UniqueName: \"kubernetes.io/projected/f80439cc-c38d-4210-a203-f478704d9dcd-kube-api-access-ndb68\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424361 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-serving-cert\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424481 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f80439cc-c38d-4210-a203-f478704d9dcd-machine-approver-tls\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424519 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-tmp\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424535 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qgvv5\" (UniqueName: \"kubernetes.io/projected/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-kube-api-access-qgvv5\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424585 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f80439cc-c38d-4210-a203-f478704d9dcd-config\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424661 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7kdfs\" (UniqueName: \"kubernetes.io/projected/91703ab7-2f05-4831-8200-85210adf830b-kube-api-access-7kdfs\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424681 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-config\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424745 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-client-ca\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424766 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91703ab7-2f05-4831-8200-85210adf830b-serving-cert\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424782 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91703ab7-2f05-4831-8200-85210adf830b-available-featuregates\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424819 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-trusted-ca\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424840 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f80439cc-c38d-4210-a203-f478704d9dcd-auth-proxy-config\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424863 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lw4q\" (UniqueName: \"kubernetes.io/projected/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-kube-api-access-6lw4q\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.424888 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-config\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.425319 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-tmp\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.425847 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91703ab7-2f05-4831-8200-85210adf830b-available-featuregates\") pod 
\"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.425965 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-client-ca\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.425989 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f80439cc-c38d-4210-a203-f478704d9dcd-auth-proxy-config\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.428228 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-config\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.429581 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91703ab7-2f05-4831-8200-85210adf830b-serving-cert\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.430306 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f80439cc-c38d-4210-a203-f478704d9dcd-config\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.431460 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-serving-cert\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.440349 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f80439cc-c38d-4210-a203-f478704d9dcd-machine-approver-tls\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.444899 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kdfs\" (UniqueName: \"kubernetes.io/projected/91703ab7-2f05-4831-8200-85210adf830b-kube-api-access-7kdfs\") pod \"openshift-config-operator-5777786469-2xrjj\" (UID: \"91703ab7-2f05-4831-8200-85210adf830b\") " pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc 
kubenswrapper[5103]: I0130 00:12:01.446293 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndb68\" (UniqueName: \"kubernetes.io/projected/f80439cc-c38d-4210-a203-f478704d9dcd-kube-api-access-ndb68\") pod \"machine-approver-54c688565-qsf67\" (UID: \"f80439cc-c38d-4210-a203-f478704d9dcd\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.456438 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgvv5\" (UniqueName: \"kubernetes.io/projected/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-kube-api-access-qgvv5\") pod \"route-controller-manager-776cdc94d6-7csdm\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.468415 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29495520-x6t57"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.477423 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.489854 5103 generic.go:358] "Generic (PLEG): container finished" podID="2ed60012-d4e8-45fd-b124-fe7d6ca49ca0" containerID="8cfe21d64c3f3bd883b20a431e0d46df831383bde83f5c38c04c36b9c506f63b" exitCode=0 Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.490012 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.511947 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.523415 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526213 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4022194a-f5e9-494f-b079-ddd414c3da50-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526290 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4022194a-f5e9-494f-b079-ddd414c3da50-tmp\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526333 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4022194a-f5e9-494f-b079-ddd414c3da50-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526413 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-trusted-ca\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526487 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6lw4q\" (UniqueName: \"kubernetes.io/projected/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-kube-api-access-6lw4q\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526510 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lskwx\" (UniqueName: \"kubernetes.io/projected/4022194a-f5e9-494f-b079-ddd414c3da50-kube-api-access-lskwx\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526529 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-config\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526551 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4022194a-f5e9-494f-b079-ddd414c3da50-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: 
\"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526568 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-serving-cert\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.526594 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/4022194a-f5e9-494f-b079-ddd414c3da50-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.527992 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-config\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.528266 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-trusted-ca\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.537983 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-serving-cert\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.558196 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lw4q\" (UniqueName: \"kubernetes.io/projected/bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0-kube-api-access-6lw4q\") pod \"console-operator-67c89758df-4rfkh\" (UID: \"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0\") " pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.569485 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerDied","Data":"8cfe21d64c3f3bd883b20a431e0d46df831383bde83f5c38c04c36b9c506f63b"} Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.569549 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.573307 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.574532 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.581153 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.581735 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.629497 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4022194a-f5e9-494f-b079-ddd414c3da50-tmp\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.629720 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2gvx\" (UniqueName: \"kubernetes.io/projected/c5938973-a6f9-4d60-b605-3f02b2c1c84f-kube-api-access-t2gvx\") pod \"image-pruner-29495520-x6t57\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.629820 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4022194a-f5e9-494f-b079-ddd414c3da50-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.629940 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lskwx\" (UniqueName: \"kubernetes.io/projected/4022194a-f5e9-494f-b079-ddd414c3da50-kube-api-access-lskwx\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.630058 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4022194a-f5e9-494f-b079-ddd414c3da50-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.630171 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/4022194a-f5e9-494f-b079-ddd414c3da50-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.630421 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca\") pod \"image-pruner-29495520-x6t57\" (UID: 
\"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.630492 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4022194a-f5e9-494f-b079-ddd414c3da50-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.632335 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/4022194a-f5e9-494f-b079-ddd414c3da50-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.638833 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.639327 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4022194a-f5e9-494f-b079-ddd414c3da50-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.644075 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4022194a-f5e9-494f-b079-ddd414c3da50-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.649974 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lskwx\" (UniqueName: \"kubernetes.io/projected/4022194a-f5e9-494f-b079-ddd414c3da50-kube-api-access-lskwx\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.654418 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4022194a-f5e9-494f-b079-ddd414c3da50-tmp\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.658637 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4022194a-f5e9-494f-b079-ddd414c3da50-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-7hhqr\" (UID: \"4022194a-f5e9-494f-b079-ddd414c3da50\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.675495 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.731405 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t2gvx\" (UniqueName: \"kubernetes.io/projected/c5938973-a6f9-4d60-b605-3f02b2c1c84f-kube-api-access-t2gvx\") pod \"image-pruner-29495520-x6t57\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.731568 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca\") pod \"image-pruner-29495520-x6t57\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.732101 5103 configmap.go:193] Couldn't get configMap openshift-image-registry/serviceca: object "openshift-image-registry"/"serviceca" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.732149 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca podName:c5938973-a6f9-4d60-b605-3f02b2c1c84f nodeName:}" failed. No retries permitted until 2026-01-30 00:12:02.232133977 +0000 UTC m=+112.103632029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serviceca" (UniqueName: "kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca") pod "image-pruner-29495520-x6t57" (UID: "c5938973-a6f9-4d60-b605-3f02b2c1c84f") : object "openshift-image-registry"/"serviceca" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.754735 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2gvx\" (UniqueName: \"kubernetes.io/projected/c5938973-a6f9-4d60-b605-3f02b2c1c84f-kube-api-access-t2gvx\") pod \"image-pruner-29495520-x6t57\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.763871 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-clmhf"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.763925 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-j77tr"] Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.764000 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.767306 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.767328 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.832765 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22187967-c3cb-4aec-b6d5-65c7c6167554-config\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.832840 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22187967-c3cb-4aec-b6d5-65c7c6167554-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.832964 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5qk2\" (UniqueName: \"kubernetes.io/projected/22187967-c3cb-4aec-b6d5-65c7c6167554-kube-api-access-x5qk2\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.833035 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22187967-c3cb-4aec-b6d5-65c7c6167554-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: W0130 00:12:01.882779 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfb3c35d_63fc_4a35_91ea_ef0e217fc5d0.slice/crio-0db1d5950bc0b6f804c3674ce2fba82afbbed1a38d14ee614c6321e7221e4482 WatchSource:0}: Error finding container 0db1d5950bc0b6f804c3674ce2fba82afbbed1a38d14ee614c6321e7221e4482: Status 404 returned error can't find the container with id 0db1d5950bc0b6f804c3674ce2fba82afbbed1a38d14ee614c6321e7221e4482 Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.923800 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.934171 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22187967-c3cb-4aec-b6d5-65c7c6167554-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.934208 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22187967-c3cb-4aec-b6d5-65c7c6167554-config\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.934236 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22187967-c3cb-4aec-b6d5-65c7c6167554-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.934301 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x5qk2\" (UniqueName: \"kubernetes.io/projected/22187967-c3cb-4aec-b6d5-65c7c6167554-kube-api-access-x5qk2\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.935158 5103 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.935268 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22187967-c3cb-4aec-b6d5-65c7c6167554-serving-cert podName:22187967-c3cb-4aec-b6d5-65c7c6167554 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:02.435247168 +0000 UTC m=+112.306745221 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/22187967-c3cb-4aec-b6d5-65c7c6167554-serving-cert") pod "openshift-controller-manager-operator-686468bdd5-8qhdx" (UID: "22187967-c3cb-4aec-b6d5-65c7c6167554") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: I0130 00:12:01.935456 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22187967-c3cb-4aec-b6d5-65c7c6167554-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.935158 5103 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.935507 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22187967-c3cb-4aec-b6d5-65c7c6167554-config podName:22187967-c3cb-4aec-b6d5-65c7c6167554 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:02.435498695 +0000 UTC m=+112.306996747 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/22187967-c3cb-4aec-b6d5-65c7c6167554-config") pod "openshift-controller-manager-operator-686468bdd5-8qhdx" (UID: "22187967-c3cb-4aec-b6d5-65c7c6167554") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.947270 5103 projected.go:289] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.947782 5103 projected.go:289] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.947796 5103 projected.go:194] Error preparing data for projected volume kube-api-access-x5qk2 for pod openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Jan 30 00:12:01 crc kubenswrapper[5103]: E0130 00:12:01.947891 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22187967-c3cb-4aec-b6d5-65c7c6167554-kube-api-access-x5qk2 podName:22187967-c3cb-4aec-b6d5-65c7c6167554 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:02.447868415 +0000 UTC m=+112.319366467 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5qk2" (UniqueName: "kubernetes.io/projected/22187967-c3cb-4aec-b6d5-65c7c6167554-kube-api-access-x5qk2") pod "openshift-controller-manager-operator-686468bdd5-8qhdx" (UID: "22187967-c3cb-4aec-b6d5-65c7c6167554") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Jan 30 00:12:02 crc kubenswrapper[5103]: W0130 00:12:02.087031 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4022194a_f5e9_494f_b079_ddd414c3da50.slice/crio-52800e681b19c425661b7acb6844d74c5edebe5887b2651595561939539241d6 WatchSource:0}: Error finding container 52800e681b19c425661b7acb6844d74c5edebe5887b2651595561939539241d6: Status 404 returned error can't find the container with id 52800e681b19c425661b7acb6844d74c5edebe5887b2651595561939539241d6 Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.238749 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca\") pod \"image-pruner-29495520-x6t57\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.239848 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca\") pod \"image-pruner-29495520-x6t57\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.374957 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b"] Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.375254 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.376235 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.376429 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.376567 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.376704 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.377032 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.382145 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.382431 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.382831 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.383689 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.383957 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.384304 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.384428 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.384522 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.384724 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.384839 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.384939 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.385029 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.385273 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.385288 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.385679 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.440892 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22187967-c3cb-4aec-b6d5-65c7c6167554-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.441005 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22187967-c3cb-4aec-b6d5-65c7c6167554-config\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.441030 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj26b\" (UniqueName: \"kubernetes.io/projected/5f40ccbb-715c-4854-b28f-ab8055375c91-kube-api-access-jj26b\") pod \"downloads-747b44746d-j77tr\" (UID: \"5f40ccbb-715c-4854-b28f-ab8055375c91\") " pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.442952 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22187967-c3cb-4aec-b6d5-65c7c6167554-config\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.447087 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22187967-c3cb-4aec-b6d5-65c7c6167554-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.542581 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x5qk2\" (UniqueName: \"kubernetes.io/projected/22187967-c3cb-4aec-b6d5-65c7c6167554-kube-api-access-x5qk2\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.543118 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jj26b\" (UniqueName: \"kubernetes.io/projected/5f40ccbb-715c-4854-b28f-ab8055375c91-kube-api-access-jj26b\") pod \"downloads-747b44746d-j77tr\" (UID: \"5f40ccbb-715c-4854-b28f-ab8055375c91\") " pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.574961 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x5qk2\" (UniqueName: \"kubernetes.io/projected/22187967-c3cb-4aec-b6d5-65c7c6167554-kube-api-access-x5qk2\") pod \"openshift-controller-manager-operator-686468bdd5-8qhdx\" (UID: \"22187967-c3cb-4aec-b6d5-65c7c6167554\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.581375 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj26b\" (UniqueName: \"kubernetes.io/projected/5f40ccbb-715c-4854-b28f-ab8055375c91-kube-api-access-jj26b\") pod \"downloads-747b44746d-j77tr\" (UID: \"5f40ccbb-715c-4854-b28f-ab8055375c91\") " pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:02 crc kubenswrapper[5103]: W0130 00:12:02.635012 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5938973_a6f9_4d60_b605_3f02b2c1c84f.slice/crio-f110469e2ef62c0b54ea25d9e9c5273b55bbc9a77eb25e1ad48e65441633b3d0 WatchSource:0}: Error finding container f110469e2ef62c0b54ea25d9e9c5273b55bbc9a77eb25e1ad48e65441633b3d0: Status 404 returned error can't find the container with id f110469e2ef62c0b54ea25d9e9c5273b55bbc9a77eb25e1ad48e65441633b3d0 Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.666814 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" event={"ID":"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204","Type":"ContainerStarted","Data":"9131b9500cdfd415e7ec77b417734cc2ba2d9446de26cd67b54fba245814badb"} Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.666884 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc"] Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.666986 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.669484 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.669775 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.670019 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.670422 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.670839 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.745929 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-config\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.745987 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.746019 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxq6v\" (UniqueName: \"kubernetes.io/projected/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-kube-api-access-mxq6v\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.773772 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.779900 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.847366 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-config\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.847427 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.847458 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mxq6v\" (UniqueName: \"kubernetes.io/projected/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-kube-api-access-mxq6v\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.849255 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-config\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.865014 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.873450 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxq6v\" (UniqueName: \"kubernetes.io/projected/8ee6bca0-0d30-4653-b2a4-a79ebde1fed9-kube-api-access-mxq6v\") pod \"openshift-apiserver-operator-846cbfc458-jpc9b\" (UID: \"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.894230 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.897028 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.903507 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.904351 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.912973 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" event={"ID":"91703ab7-2f05-4831-8200-85210adf830b","Type":"ContainerStarted","Data":"a0a8569837d450b0258dafe39d145a428bb48817a83228c57446c186695e2e5c"} Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.913017 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" event={"ID":"f80439cc-c38d-4210-a203-f478704d9dcd","Type":"ContainerStarted","Data":"9a5cd267c6e2d0a20dd4f22ec274fd163a4524dbbd1f722646c7daaf7c0264df"} Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.913031 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4ltx6"] Jan 30 00:12:02 crc kubenswrapper[5103]: I0130 00:12:02.988697 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.052705 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.052749 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.052782 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82mxx\" (UniqueName: \"kubernetes.io/projected/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-kube-api-access-82mxx\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.052808 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-images\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: 
\"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.087116 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.087339 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.089728 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.089875 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.123429 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" event={"ID":"f80439cc-c38d-4210-a203-f478704d9dcd","Type":"ContainerStarted","Data":"6eb6a3b8b96fafcdc3da9bddd43f830e446a37a40daab0ee1333d5204dfecefe"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.123494 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" event={"ID":"a0ff7eb1-7b00-4318-936e-30862acd97e5","Type":"ContainerStarted","Data":"dfa2d5328b163a06e0784ef6748b897dd97edce0f633750bb32fdbd9501d39e5"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.123517 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.123751 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.129283 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.129401 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.129594 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.129724 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.147406 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" event={"ID":"d3abf3af-b96a-44fa-bd40-1c92bab19b92","Type":"ContainerStarted","Data":"f01ae49c3dbf6ce1c41262f39b1cfb6c8326085cddd7aa8f645756c56fc66e24"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.147459 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" event={"ID":"e9100695-b78d-4b2f-9cea-9d022064c792","Type":"ContainerStarted","Data":"c80a2cc41703a4137b5b54d52cddf220a4c7bc6710518ed255865caec779f53a"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.147482 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" event={"ID":"4022194a-f5e9-494f-b079-ddd414c3da50","Type":"ContainerStarted","Data":"52800e681b19c425661b7acb6844d74c5edebe5887b2651595561939539241d6"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.147506 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" event={"ID":"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0","Type":"ContainerStarted","Data":"0db1d5950bc0b6f804c3674ce2fba82afbbed1a38d14ee614c6321e7221e4482"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.147506 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.147526 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-5tp7b"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.149461 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.149836 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.150088 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.154522 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxrjh\" (UniqueName: \"kubernetes.io/projected/c3bbecd7-5e60-4290-bc24-b4f292d0d515-kube-api-access-gxrjh\") pod \"multus-admission-controller-69db94689b-4ltx6\" (UID: \"c3bbecd7-5e60-4290-bc24-b4f292d0d515\") " pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.154643 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.154683 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.154767 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-82mxx\" (UniqueName: \"kubernetes.io/projected/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-kube-api-access-82mxx\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.154832 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-images\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.154891 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3bbecd7-5e60-4290-bc24-b4f292d0d515-webhook-certs\") pod \"multus-admission-controller-69db94689b-4ltx6\" (UID: \"c3bbecd7-5e60-4290-bc24-b4f292d0d515\") " 
pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.155694 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.155886 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-images\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.163041 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.173142 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-82mxx\" (UniqueName: \"kubernetes.io/projected/aa3b2afd-f2d4-40f0-bbd3-19225d26438e-kube-api-access-82mxx\") pod \"machine-config-operator-67c9d58cbb-dxgsc\" (UID: \"aa3b2afd-f2d4-40f0-bbd3-19225d26438e\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.196578 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.196762 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.201636 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.201888 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.202074 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.202124 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.202154 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.202739 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.255956 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkh5s\" (UniqueName: \"kubernetes.io/projected/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-kube-api-access-mkh5s\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.256960 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3bbecd7-5e60-4290-bc24-b4f292d0d515-webhook-certs\") pod \"multus-admission-controller-69db94689b-4ltx6\" (UID: \"c3bbecd7-5e60-4290-bc24-b4f292d0d515\") " pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.256267 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.257033 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-apiservice-cert\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.257284 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds4fl\" (UniqueName: \"kubernetes.io/projected/ca146169-65b5-4eed-be41-43bb8bf87656-kube-api-access-ds4fl\") pod \"migrator-866fcbc849-6mbbh\" (UID: \"ca146169-65b5-4eed-be41-43bb8bf87656\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.257317 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-tmpfs\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.257349 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-webhook-cert\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.257467 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gxrjh\" (UniqueName: \"kubernetes.io/projected/c3bbecd7-5e60-4290-bc24-b4f292d0d515-kube-api-access-gxrjh\") pod \"multus-admission-controller-69db94689b-4ltx6\" (UID: \"c3bbecd7-5e60-4290-bc24-b4f292d0d515\") " pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.261406 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c3bbecd7-5e60-4290-bc24-b4f292d0d515-webhook-certs\") pod \"multus-admission-controller-69db94689b-4ltx6\" (UID: \"c3bbecd7-5e60-4290-bc24-b4f292d0d515\") " pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: W0130 00:12:03.275412 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ee6bca0_0d30_4653_b2a4_a79ebde1fed9.slice/crio-d2250273cf500f9bfbca6370eaf5ca7b122825db77e6dec30e391d7de2e7b858 WatchSource:0}: Error finding container d2250273cf500f9bfbca6370eaf5ca7b122825db77e6dec30e391d7de2e7b858: Status 404 returned error can't find the container with id d2250273cf500f9bfbca6370eaf5ca7b122825db77e6dec30e391d7de2e7b858 Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.287607 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxrjh\" (UniqueName: \"kubernetes.io/projected/c3bbecd7-5e60-4290-bc24-b4f292d0d515-kube-api-access-gxrjh\") pod \"multus-admission-controller-69db94689b-4ltx6\" 
(UID: \"c3bbecd7-5e60-4290-bc24-b4f292d0d515\") " pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.344200 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.374720 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3b3db2b-ab99-483b-a13c-4947269bc330-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.374783 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ds4fl\" (UniqueName: \"kubernetes.io/projected/ca146169-65b5-4eed-be41-43bb8bf87656-kube-api-access-ds4fl\") pod \"migrator-866fcbc849-6mbbh\" (UID: \"ca146169-65b5-4eed-be41-43bb8bf87656\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.374874 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-tmpfs\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375228 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-webhook-cert\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375414 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3b3db2b-ab99-483b-a13c-4947269bc330-config\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375461 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw55p\" (UniqueName: \"kubernetes.io/projected/f3b3db2b-ab99-483b-a13c-4947269bc330-kube-api-access-fw55p\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375528 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-tmpfs\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375540 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mkh5s\" (UniqueName: 
\"kubernetes.io/projected/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-kube-api-access-mkh5s\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375619 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3b3db2b-ab99-483b-a13c-4947269bc330-images\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.375670 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-apiservice-cert\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.380149 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-webhook-cert\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.384020 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-apiservice-cert\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.391931 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds4fl\" (UniqueName: \"kubernetes.io/projected/ca146169-65b5-4eed-be41-43bb8bf87656-kube-api-access-ds4fl\") pod \"migrator-866fcbc849-6mbbh\" (UID: \"ca146169-65b5-4eed-be41-43bb8bf87656\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.392689 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkh5s\" (UniqueName: \"kubernetes.io/projected/fcede4f0-4721-47c1-bc52-b68bf7ad29d4-kube-api-access-mkh5s\") pod \"packageserver-7d4fc7d867-6v8cn\" (UID: \"fcede4f0-4721-47c1-bc52-b68bf7ad29d4\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.423547 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.439479 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.439686 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.439907 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.445802 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.445918 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446110 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446283 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446440 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446748 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446830 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446875 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.446967 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.447084 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.447751 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.447937 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.449481 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.476612 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fw55p\" (UniqueName: \"kubernetes.io/projected/f3b3db2b-ab99-483b-a13c-4947269bc330-kube-api-access-fw55p\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.476652 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/23500895-f472-4de5-afda-f1cc02807ceb-tmp-dir\") pod 
\"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.476687 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s89jd\" (UniqueName: \"kubernetes.io/projected/23500895-f472-4de5-afda-f1cc02807ceb-kube-api-access-s89jd\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477124 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3b3db2b-ab99-483b-a13c-4947269bc330-images\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477159 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23500895-f472-4de5-afda-f1cc02807ceb-etcd-client\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477360 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3b3db2b-ab99-483b-a13c-4947269bc330-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477417 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-etcd-ca\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477453 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-config\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477480 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3b3db2b-ab99-483b-a13c-4947269bc330-config\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.477514 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-etcd-service-ca\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 
00:12:03.477539 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23500895-f472-4de5-afda-f1cc02807ceb-serving-cert\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.478318 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3b3db2b-ab99-483b-a13c-4947269bc330-images\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.480612 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3b3db2b-ab99-483b-a13c-4947269bc330-config\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.480711 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3b3db2b-ab99-483b-a13c-4947269bc330-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.493231 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw55p\" (UniqueName: \"kubernetes.io/projected/f3b3db2b-ab99-483b-a13c-4947269bc330-kube-api-access-fw55p\") pod \"machine-api-operator-755bb95488-5tp7b\" (UID: \"f3b3db2b-ab99-483b-a13c-4947269bc330\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.523918 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-7v6vx"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.524022 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.526179 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.528842 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.529305 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.529459 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.532741 5103 generic.go:358] "Generic (PLEG): container finished" podID="e9100695-b78d-4b2f-9cea-9d022064c792" containerID="911cfd942b49cf6ceaca0342397db4702338409b8ea3eddfbf7731f2ad3b5a53" exitCode=0 Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.549143 5103 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-spmxr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.549258 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.563206 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.563250 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.563482 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.568639 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.568958 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.569145 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.569462 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.569635 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.570645 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.574762 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579271 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s89jd\" (UniqueName: \"kubernetes.io/projected/23500895-f472-4de5-afda-f1cc02807ceb-kube-api-access-s89jd\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579347 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23500895-f472-4de5-afda-f1cc02807ceb-etcd-client\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579394 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-config\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579450 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-etcd-ca\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579473 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579507 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-config\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579531 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5vl7\" (UniqueName: \"kubernetes.io/projected/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-kube-api-access-g5vl7\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579570 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-etcd-service-ca\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579651 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23500895-f472-4de5-afda-f1cc02807ceb-serving-cert\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.579707 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/23500895-f472-4de5-afda-f1cc02807ceb-tmp-dir\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.582719 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-etcd-service-ca\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.582945 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-config\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.583268 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/23500895-f472-4de5-afda-f1cc02807ceb-etcd-ca\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.586941 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/23500895-f472-4de5-afda-f1cc02807ceb-tmp-dir\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.591271 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23500895-f472-4de5-afda-f1cc02807ceb-etcd-client\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.591825 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23500895-f472-4de5-afda-f1cc02807ceb-serving-cert\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.600410 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s89jd\" (UniqueName: \"kubernetes.io/projected/23500895-f472-4de5-afda-f1cc02807ceb-kube-api-access-s89jd\") pod \"etcd-operator-69b85846b6-v2hgb\" (UID: \"23500895-f472-4de5-afda-f1cc02807ceb\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.619693 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" event={"ID":"91703ab7-2f05-4831-8200-85210adf830b","Type":"ContainerStarted","Data":"f0078a0ce155b37c23086d472f5f677a2cdb7136a582b7aeb8db53e9394aa660"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.619743 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" event={"ID":"d3abf3af-b96a-44fa-bd40-1c92bab19b92","Type":"ContainerStarted","Data":"8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.619758 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" event={"ID":"e9100695-b78d-4b2f-9cea-9d022064c792","Type":"ContainerDied","Data":"911cfd942b49cf6ceaca0342397db4702338409b8ea3eddfbf7731f2ad3b5a53"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.619774 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" event={"ID":"22187967-c3cb-4aec-b6d5-65c7c6167554","Type":"ContainerStarted","Data":"beafbe80e56c1ea1eef4b374e8294a506eec237632db812c7cf796d7effbab33"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.619788 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerStarted","Data":"9c8f4b52155e3ae7036283a61c621da7d9510d4baa4a6376d7850ec6f82cd529"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.620017 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" event={"ID":"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9","Type":"ContainerStarted","Data":"d2250273cf500f9bfbca6370eaf5ca7b122825db77e6dec30e391d7de2e7b858"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.620039 5103 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-x6t57" event={"ID":"c5938973-a6f9-4d60-b605-3f02b2c1c84f","Type":"ContainerStarted","Data":"f110469e2ef62c0b54ea25d9e9c5273b55bbc9a77eb25e1ad48e65441633b3d0"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.620070 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" event={"ID":"2ed60012-d4e8-45fd-b124-fe7d6ca49ca0","Type":"ContainerStarted","Data":"7e56449ddcc6bfdcfae161b44edb397e26d63a11513d624eed0735d0abe80820"} Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.620088 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.622342 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.622771 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" podStartSLOduration=90.622706965 podStartE2EDuration="1m30.622706965s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:03.583883552 +0000 UTC m=+113.455381644" watchObservedRunningTime="2026-01-30 00:12:03.622706965 +0000 UTC m=+113.494205037" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.625794 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.626387 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.627263 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.627470 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.650589 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jfm6p"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.650766 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.654935 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.654987 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.681290 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b196a79-ecff-4ec8-8338-33436cfd3dcc-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6w9kf\" (UID: \"4b196a79-ecff-4ec8-8338-33436cfd3dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.682968 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9n6v\" (UniqueName: \"kubernetes.io/projected/9bef77c6-141b-4cff-a91d-7515860a6a2a-kube-api-access-r9n6v\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.683108 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-serving-cert\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.683268 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-config\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.684308 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.686298 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-config\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.687931 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-oauth-serving-cert\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688013 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-config\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688110 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688140 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-oauth-config\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688144 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688176 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-service-ca\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688193 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-trusted-ca-bundle\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688218 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g5vl7\" (UniqueName: \"kubernetes.io/projected/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-kube-api-access-g5vl7\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.688601 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59m9p\" (UniqueName: \"kubernetes.io/projected/4b196a79-ecff-4ec8-8338-33436cfd3dcc-kube-api-access-59m9p\") pod \"cluster-samples-operator-6b564684c8-6w9kf\" (UID: \"4b196a79-ecff-4ec8-8338-33436cfd3dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.692391 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.693347 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 
00:12:03.693826 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.702172 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.709438 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5vl7\" (UniqueName: \"kubernetes.io/projected/b9b4e43e-61bd-46ba-a825-e0bca8c8da4e-kube-api-access-g5vl7\") pod \"kube-storage-version-migrator-operator-565b79b866-zr8xm\" (UID: \"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.709688 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.709867 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.713309 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.713522 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.713766 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.714974 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.724197 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.743460 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-6z46s"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.743584 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.743511 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.755076 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.755186 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.755653 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.756125 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.757552 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.762303 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.762338 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.764571 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.766636 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.771201 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.778119 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.779110 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.779526 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.781641 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791353 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r9n6v\" (UniqueName: \"kubernetes.io/projected/9bef77c6-141b-4cff-a91d-7515860a6a2a-kube-api-access-r9n6v\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791441 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-registry-tls\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791604 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4shg\" (UniqueName: \"kubernetes.io/projected/35998b47-ed37-4a50-9553-18147918d9cb-kube-api-access-c4shg\") pod \"control-plane-machine-set-operator-75ffdb6fcd-94r9t\" (UID: \"35998b47-ed37-4a50-9553-18147918d9cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791645 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791790 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-trusted-ca\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791857 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plqc7\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-kube-api-access-plqc7\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.791902 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-serving-cert\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.792227 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/35998b47-ed37-4a50-9553-18147918d9cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-94r9t\" (UID: \"35998b47-ed37-4a50-9553-18147918d9cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: E0130 00:12:03.792396 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.292376484 +0000 UTC m=+114.163874536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.793373 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.794309 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.794846 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc8aa23-eb1a-486e-9462-499486335cdc-config\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.794887 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8dc8aa23-eb1a-486e-9462-499486335cdc-tmp-dir\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795003 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-registry-certificates\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795030 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d69ff998-a349-40e4-8653-bfded7d60952-installation-pull-secrets\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795117 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-oauth-serving-cert\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795180 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-config\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795203 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc8aa23-eb1a-486e-9462-499486335cdc-serving-cert\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795264 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-oauth-config\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795285 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8dc8aa23-eb1a-486e-9462-499486335cdc-kube-api-access\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795345 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-service-ca\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795368 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-trusted-ca-bundle\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795414 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d69ff998-a349-40e4-8653-bfded7d60952-ca-trust-extracted\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795471 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-59m9p\" (UniqueName: \"kubernetes.io/projected/4b196a79-ecff-4ec8-8338-33436cfd3dcc-kube-api-access-59m9p\") pod \"cluster-samples-operator-6b564684c8-6w9kf\" (UID: \"4b196a79-ecff-4ec8-8338-33436cfd3dcc\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795498 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-bound-sa-token\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.795562 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b196a79-ecff-4ec8-8338-33436cfd3dcc-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6w9kf\" (UID: \"4b196a79-ecff-4ec8-8338-33436cfd3dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.796110 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-oauth-serving-cert\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.796638 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-config\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.796900 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-service-ca\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.798101 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bef77c6-141b-4cff-a91d-7515860a6a2a-trusted-ca-bundle\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.798112 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.798265 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.798406 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.798601 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.802821 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-serving-cert\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.804736 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bef77c6-141b-4cff-a91d-7515860a6a2a-console-oauth-config\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.807877 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b196a79-ecff-4ec8-8338-33436cfd3dcc-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-6w9kf\" (UID: \"4b196a79-ecff-4ec8-8338-33436cfd3dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.842030 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9n6v\" (UniqueName: \"kubernetes.io/projected/9bef77c6-141b-4cff-a91d-7515860a6a2a-kube-api-access-r9n6v\") pod \"console-64d44f6ddf-7v6vx\" (UID: \"9bef77c6-141b-4cff-a91d-7515860a6a2a\") " pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.851062 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-59m9p\" (UniqueName: \"kubernetes.io/projected/4b196a79-ecff-4ec8-8338-33436cfd3dcc-kube-api-access-59m9p\") pod \"cluster-samples-operator-6b564684c8-6w9kf\" (UID: \"4b196a79-ecff-4ec8-8338-33436cfd3dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.858951 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.860775 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.862071 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896339 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896461 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-config\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896508 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc8aa23-eb1a-486e-9462-499486335cdc-serving-cert\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896528 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8dc8aa23-eb1a-486e-9462-499486335cdc-kube-api-access\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896551 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8550f022-16a5-4fac-a94e-fc322ee0cb9d-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896568 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9dfcfad-0e85-4b3e-9a33-3729f7033251-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896587 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8550f022-16a5-4fac-a94e-fc322ee0cb9d-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896605 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896624 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d69ff998-a349-40e4-8653-bfded7d60952-ca-trust-extracted\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896641 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-bound-sa-token\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896659 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7btzr\" (UniqueName: \"kubernetes.io/projected/3ed247cb-77c1-47fb-ad58-f14f03aae2f2-kube-api-access-7btzr\") pod \"package-server-manager-77f986bd66-mv94c\" (UID: \"3ed247cb-77c1-47fb-ad58-f14f03aae2f2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896675 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dfcfad-0e85-4b3e-9a33-3729f7033251-config\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896695 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9dfcfad-0e85-4b3e-9a33-3729f7033251-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896716 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9vvc\" (UniqueName: \"kubernetes.io/projected/8550f022-16a5-4fac-a94e-fc322ee0cb9d-kube-api-access-s9vvc\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896732 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c9dfcfad-0e85-4b3e-9a33-3729f7033251-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896756 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-plqc7\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-kube-api-access-plqc7\") pod \"image-registry-66587d64c8-jfm6p\" (UID: 
\"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896778 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72531653-f2c6-4754-8209-24104364d6f4-serving-cert\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896794 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/42f46e1b-e6a2-499c-9e01-fe08785a78a4-tmpfs\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896819 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/35998b47-ed37-4a50-9553-18147918d9cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-94r9t\" (UID: \"35998b47-ed37-4a50-9553-18147918d9cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896869 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8dc8aa23-eb1a-486e-9462-499486335cdc-tmp-dir\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896888 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896914 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-registry-certificates\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896932 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d69ff998-a349-40e4-8653-bfded7d60952-installation-pull-secrets\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896952 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ed247cb-77c1-47fb-ad58-f14f03aae2f2-package-server-manager-serving-cert\") pod 
\"package-server-manager-77f986bd66-mv94c\" (UID: \"3ed247cb-77c1-47fb-ad58-f14f03aae2f2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896972 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42f46e1b-e6a2-499c-9e01-fe08785a78a4-srv-cert\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.896996 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8550f022-16a5-4fac-a94e-fc322ee0cb9d-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897025 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42f46e1b-e6a2-499c-9e01-fe08785a78a4-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897079 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dff8\" (UniqueName: \"kubernetes.io/projected/42f46e1b-e6a2-499c-9e01-fe08785a78a4-kube-api-access-4dff8\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897099 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-registry-tls\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897123 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkm5p\" (UniqueName: \"kubernetes.io/projected/72531653-f2c6-4754-8209-24104364d6f4-kube-api-access-wkm5p\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897142 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-trusted-ca\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897226 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c4shg\" (UniqueName: \"kubernetes.io/projected/35998b47-ed37-4a50-9553-18147918d9cb-kube-api-access-c4shg\") pod 
\"control-plane-machine-set-operator-75ffdb6fcd-94r9t\" (UID: \"35998b47-ed37-4a50-9553-18147918d9cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.897343 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc8aa23-eb1a-486e-9462-499486335cdc-config\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.898682 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-trusted-ca\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: E0130 00:12:03.899425 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.399391962 +0000 UTC m=+114.270890014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.899584 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d69ff998-a349-40e4-8653-bfded7d60952-ca-trust-extracted\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.902293 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-registry-certificates\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.909335 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc8aa23-eb1a-486e-9462-499486335cdc-config\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.913366 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8dc8aa23-eb1a-486e-9462-499486335cdc-tmp-dir\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 
00:12:03.914351 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-dtdff"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.914795 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.915747 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-registry-tls\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.916677 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.917632 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d69ff998-a349-40e4-8653-bfded7d60952-installation-pull-secrets\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.917999 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc8aa23-eb1a-486e-9462-499486335cdc-serving-cert\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.918852 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.919367 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.919456 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.919836 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/35998b47-ed37-4a50-9553-18147918d9cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-94r9t\" (UID: \"35998b47-ed37-4a50-9553-18147918d9cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.921307 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8dc8aa23-eb1a-486e-9462-499486335cdc-kube-api-access\") pod \"kube-apiserver-operator-575994946d-jsk5t\" (UID: \"8dc8aa23-eb1a-486e-9462-499486335cdc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.922740 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4shg\" (UniqueName: 
\"kubernetes.io/projected/35998b47-ed37-4a50-9553-18147918d9cb-kube-api-access-c4shg\") pod \"control-plane-machine-set-operator-75ffdb6fcd-94r9t\" (UID: \"35998b47-ed37-4a50-9553-18147918d9cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.923100 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.924251 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-bound-sa-token\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.925362 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-plqc7\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-kube-api-access-plqc7\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.975802 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz"] Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.976834 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.982985 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 30 00:12:03 crc kubenswrapper[5103]: I0130 00:12:03.994351 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001646 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4dff8\" (UniqueName: \"kubernetes.io/projected/42f46e1b-e6a2-499c-9e01-fe08785a78a4-kube-api-access-4dff8\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001691 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wkm5p\" (UniqueName: \"kubernetes.io/projected/72531653-f2c6-4754-8209-24104364d6f4-kube-api-access-wkm5p\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001759 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-config\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001813 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/8550f022-16a5-4fac-a94e-fc322ee0cb9d-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001828 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9dfcfad-0e85-4b3e-9a33-3729f7033251-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001848 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8550f022-16a5-4fac-a94e-fc322ee0cb9d-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001876 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001917 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7btzr\" (UniqueName: \"kubernetes.io/projected/3ed247cb-77c1-47fb-ad58-f14f03aae2f2-kube-api-access-7btzr\") pod \"package-server-manager-77f986bd66-mv94c\" (UID: \"3ed247cb-77c1-47fb-ad58-f14f03aae2f2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001945 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dfcfad-0e85-4b3e-9a33-3729f7033251-config\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001965 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9dfcfad-0e85-4b3e-9a33-3729f7033251-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.001989 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s9vvc\" (UniqueName: \"kubernetes.io/projected/8550f022-16a5-4fac-a94e-fc322ee0cb9d-kube-api-access-s9vvc\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002011 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/c9dfcfad-0e85-4b3e-9a33-3729f7033251-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002085 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002102 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72531653-f2c6-4754-8209-24104364d6f4-serving-cert\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002122 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/42f46e1b-e6a2-499c-9e01-fe08785a78a4-tmpfs\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002163 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002540 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ed247cb-77c1-47fb-ad58-f14f03aae2f2-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-mv94c\" (UID: \"3ed247cb-77c1-47fb-ad58-f14f03aae2f2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002676 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42f46e1b-e6a2-499c-9e01-fe08785a78a4-srv-cert\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.002761 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8550f022-16a5-4fac-a94e-fc322ee0cb9d-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.003385 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c9dfcfad-0e85-4b3e-9a33-3729f7033251-tmp\") pod 
\"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.004223 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dfcfad-0e85-4b3e-9a33-3729f7033251-config\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.005404 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.006058 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.007616 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42f46e1b-e6a2-499c-9e01-fe08785a78a4-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.009309 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9dfcfad-0e85-4b3e-9a33-3729f7033251-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.009806 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.009962 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.509940056 +0000 UTC m=+114.381438178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.010404 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/42f46e1b-e6a2-499c-9e01-fe08785a78a4-tmpfs\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.011781 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-config\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.012548 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72531653-f2c6-4754-8209-24104364d6f4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.015212 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42f46e1b-e6a2-499c-9e01-fe08785a78a4-srv-cert\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.015869 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ed247cb-77c1-47fb-ad58-f14f03aae2f2-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-mv94c\" (UID: \"3ed247cb-77c1-47fb-ad58-f14f03aae2f2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.016132 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42f46e1b-e6a2-499c-9e01-fe08785a78a4-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.031935 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8550f022-16a5-4fac-a94e-fc322ee0cb9d-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.034989 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8550f022-16a5-4fac-a94e-fc322ee0cb9d-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.036653 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.036696 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.042275 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72531653-f2c6-4754-8209-24104364d6f4-serving-cert\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.050433 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.052941 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.061091 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw"] Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.062646 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.066800 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.086008 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.094153 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.106609 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9"] Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.112281 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.111824 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7btzr\" (UniqueName: \"kubernetes.io/projected/3ed247cb-77c1-47fb-ad58-f14f03aae2f2-kube-api-access-7btzr\") pod \"package-server-manager-77f986bd66-mv94c\" (UID: \"3ed247cb-77c1-47fb-ad58-f14f03aae2f2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.112533 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vpwh\" (UniqueName: \"kubernetes.io/projected/f1c445e1-3a33-419a-bd9a-0314b23539f7-kube-api-access-7vpwh\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.112696 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/02410934-0df2-4e17-9042-91fa47becda6-signing-cabundle\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.113966 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.61394155 +0000 UTC m=+114.485439602 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.114331 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c445e1-3a33-419a-bd9a-0314b23539f7-config\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.114536 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqhvm\" (UniqueName: \"kubernetes.io/projected/02410934-0df2-4e17-9042-91fa47becda6-kube-api-access-wqhvm\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.114628 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c445e1-3a33-419a-bd9a-0314b23539f7-serving-cert\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.115382 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/02410934-0df2-4e17-9042-91fa47becda6-signing-key\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.115666 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.116066 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.616031351 +0000 UTC m=+114.487529403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.129020 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8550f022-16a5-4fac-a94e-fc322ee0cb9d-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.160298 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9dfcfad-0e85-4b3e-9a33-3729f7033251-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-fqwng\" (UID: \"c9dfcfad-0e85-4b3e-9a33-3729f7033251\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.176227 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9vvc\" (UniqueName: \"kubernetes.io/projected/8550f022-16a5-4fac-a94e-fc322ee0cb9d-kube-api-access-s9vvc\") pod \"ingress-operator-6b9cb4dbcf-knxwb\" (UID: \"8550f022-16a5-4fac-a94e-fc322ee0cb9d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.187328 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:04 crc kubenswrapper[5103]: W0130 00:12:04.192444 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcede4f0_4721_47c1_bc52_b68bf7ad29d4.slice/crio-7ba2f75bcb6ca4a54baf980dd9b4ccb133b13f4bbb15764f2888c2a87e17a239 WatchSource:0}: Error finding container 7ba2f75bcb6ca4a54baf980dd9b4ccb133b13f4bbb15764f2888c2a87e17a239: Status 404 returned error can't find the container with id 7ba2f75bcb6ca4a54baf980dd9b4ccb133b13f4bbb15764f2888c2a87e17a239 Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.193460 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.196200 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkm5p\" (UniqueName: \"kubernetes.io/projected/72531653-f2c6-4754-8209-24104364d6f4-kube-api-access-wkm5p\") pod \"authentication-operator-7f5c659b84-rkb6j\" (UID: \"72531653-f2c6-4754-8209-24104364d6f4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.200952 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t"] Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.202551 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.210023 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217235 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217382 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/02410934-0df2-4e17-9042-91fa47becda6-signing-key\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217413 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smxdw\" (UniqueName: \"kubernetes.io/projected/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-kube-api-access-smxdw\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217437 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a3a441e4-5ade-4309-938a-0f4fe130a721-tmpfs\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217455 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-config-volume\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217548 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-secret-volume\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217636 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vpwh\" (UniqueName: \"kubernetes.io/projected/f1c445e1-3a33-419a-bd9a-0314b23539f7-kube-api-access-7vpwh\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217754 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/02410934-0df2-4e17-9042-91fa47becda6-signing-cabundle\") pod 
\"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217777 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a3a441e4-5ade-4309-938a-0f4fe130a721-profile-collector-cert\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217808 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c445e1-3a33-419a-bd9a-0314b23539f7-config\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217850 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w9pn\" (UniqueName: \"kubernetes.io/projected/a3a441e4-5ade-4309-938a-0f4fe130a721-kube-api-access-9w9pn\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217877 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wqhvm\" (UniqueName: \"kubernetes.io/projected/02410934-0df2-4e17-9042-91fa47becda6-kube-api-access-wqhvm\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.217892 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c445e1-3a33-419a-bd9a-0314b23539f7-serving-cert\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.218138 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.71812023 +0000 UTC m=+114.589618272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.219570 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/02410934-0df2-4e17-9042-91fa47becda6-signing-cabundle\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.219933 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c445e1-3a33-419a-bd9a-0314b23539f7-config\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.221613 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a3a441e4-5ade-4309-938a-0f4fe130a721-srv-cert\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.221775 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dff8\" (UniqueName: \"kubernetes.io/projected/42f46e1b-e6a2-499c-9e01-fe08785a78a4-kube-api-access-4dff8\") pod \"catalog-operator-75ff9f647d-cw4vd\" (UID: \"42f46e1b-e6a2-499c-9e01-fe08785a78a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: W0130 00:12:04.226167 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3b3db2b_ab99_483b_a13c_4947269bc330.slice/crio-64fbde2915247d404cac68a838f6050ce8b4c6a918187c78d666a0041688dd1b WatchSource:0}: Error finding container 64fbde2915247d404cac68a838f6050ce8b4c6a918187c78d666a0041688dd1b: Status 404 returned error can't find the container with id 64fbde2915247d404cac68a838f6050ce8b4c6a918187c78d666a0041688dd1b Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.234617 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/02410934-0df2-4e17-9042-91fa47becda6-signing-key\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.239405 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c445e1-3a33-419a-bd9a-0314b23539f7-serving-cert\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.252295 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.273759 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.274155 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mf247"] Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.277246 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.322848 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a3a441e4-5ade-4309-938a-0f4fe130a721-srv-cert\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.322930 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-smxdw\" (UniqueName: \"kubernetes.io/projected/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-kube-api-access-smxdw\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.322953 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.322972 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a3a441e4-5ade-4309-938a-0f4fe130a721-tmpfs\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.322990 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-config-volume\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.323018 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-secret-volume\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.323076 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a3a441e4-5ade-4309-938a-0f4fe130a721-profile-collector-cert\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: 
\"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.323110 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9w9pn\" (UniqueName: \"kubernetes.io/projected/a3a441e4-5ade-4309-938a-0f4fe130a721-kube-api-access-9w9pn\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.325496 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-config-volume\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.325424 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a3a441e4-5ade-4309-938a-0f4fe130a721-tmpfs\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.326344 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.826329277 +0000 UTC m=+114.697827319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.329933 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-secret-volume\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.330683 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vpwh\" (UniqueName: \"kubernetes.io/projected/f1c445e1-3a33-419a-bd9a-0314b23539f7-kube-api-access-7vpwh\") pod \"service-ca-operator-5b9c976747-dg5nm\" (UID: \"f1c445e1-3a33-419a-bd9a-0314b23539f7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.336192 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a3a441e4-5ade-4309-938a-0f4fe130a721-srv-cert\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.342551 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a3a441e4-5ade-4309-938a-0f4fe130a721-profile-collector-cert\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: W0130 00:12:04.348230 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23500895_f472_4de5_afda_f1cc02807ceb.slice/crio-d2bb55bbf63772c72d65c62fbddf187a43731548106280f84e16786c76288a58 WatchSource:0}: Error finding container d2bb55bbf63772c72d65c62fbddf187a43731548106280f84e16786c76288a58: Status 404 returned error can't find the container with id d2bb55bbf63772c72d65c62fbddf187a43731548106280f84e16786c76288a58 Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.358801 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqhvm\" (UniqueName: \"kubernetes.io/projected/02410934-0df2-4e17-9042-91fa47becda6-kube-api-access-wqhvm\") pod \"service-ca-74545575db-dtdff\" (UID: \"02410934-0df2-4e17-9042-91fa47becda6\") " pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.369275 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.390951 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427098 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427537 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-config\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427571 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427590 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427617 5103 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427687 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/179763d8-8dea-40e5-ba89-1a848fbf519a-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427815 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whntj\" (UniqueName: \"kubernetes.io/projected/179763d8-8dea-40e5-ba89-1a848fbf519a-kube-api-access-whntj\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.427902 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/179763d8-8dea-40e5-ba89-1a848fbf519a-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.428363 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:04.928348333 +0000 UTC m=+114.799846375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.431286 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.439281 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.453361 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w9pn\" (UniqueName: \"kubernetes.io/projected/a3a441e4-5ade-4309-938a-0f4fe130a721-kube-api-access-9w9pn\") pod \"olm-operator-5cdf44d969-kg2rz\" (UID: \"a3a441e4-5ade-4309-938a-0f4fe130a721\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.475912 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.481751 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-smxdw\" (UniqueName: \"kubernetes.io/projected/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-kube-api-access-smxdw\") pod \"collect-profiles-29495520-hdxqw\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530036 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530156 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-config\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530184 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530200 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530224 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530253 5103 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/179763d8-8dea-40e5-ba89-1a848fbf519a-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530324 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-whntj\" (UniqueName: \"kubernetes.io/projected/179763d8-8dea-40e5-ba89-1a848fbf519a-kube-api-access-whntj\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.530365 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/179763d8-8dea-40e5-ba89-1a848fbf519a-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.531564 5103 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.531669 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-config podName:0ebc9fa5-f75b-4468-b4b8-83695dd067b6 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.031650781 +0000 UTC m=+114.903148833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-config") pod "kube-controller-manager-operator-69d5f845f8-w4q8t" (UID: "0ebc9fa5-f75b-4468-b4b8-83695dd067b6") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.532479 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.032470391 +0000 UTC m=+114.903968443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.533713 5103 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.533790 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-serving-cert podName:0ebc9fa5-f75b-4468-b4b8-83695dd067b6 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.033772863 +0000 UTC m=+114.905270985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-serving-cert") pod "kube-controller-manager-operator-69d5f845f8-w4q8t" (UID: "0ebc9fa5-f75b-4468-b4b8-83695dd067b6") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.536626 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/179763d8-8dea-40e5-ba89-1a848fbf519a-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.537178 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.567098 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.567159 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/179763d8-8dea-40e5-ba89-1a848fbf519a-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.577707 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-dtdff" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.578177 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-whntj\" (UniqueName: \"kubernetes.io/projected/179763d8-8dea-40e5-ba89-1a848fbf519a-kube-api-access-whntj\") pod \"machine-config-controller-f9cdd68f7-nc9m9\" (UID: \"179763d8-8dea-40e5-ba89-1a848fbf519a\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.586371 5103 projected.go:289] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.586404 5103 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.586478 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-kube-api-access podName:0ebc9fa5-f75b-4468-b4b8-83695dd067b6 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.086455522 +0000 UTC m=+114.957953574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-kube-api-access") pod "kube-controller-manager-operator-69d5f845f8-w4q8t" (UID: "0ebc9fa5-f75b-4468-b4b8-83695dd067b6") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: W0130 00:12:04.601934 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dc8aa23_eb1a_486e_9462_499486335cdc.slice/crio-408e260443325c70f9c09694dafcfc66a246ef2b0a79a37358551cb0bc1e8007 WatchSource:0}: Error finding container 408e260443325c70f9c09694dafcfc66a246ef2b0a79a37358551cb0bc1e8007: Status 404 returned error can't find the container with id 408e260443325c70f9c09694dafcfc66a246ef2b0a79a37358551cb0bc1e8007 Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.631241 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.631610 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.131589938 +0000 UTC m=+115.003087990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.655662 5103 generic.go:358] "Generic (PLEG): container finished" podID="91703ab7-2f05-4831-8200-85210adf830b" containerID="f0078a0ce155b37c23086d472f5f677a2cdb7136a582b7aeb8db53e9394aa660" exitCode=0 Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.692539 5103 generic.go:358] "Generic (PLEG): container finished" podID="a0ff7eb1-7b00-4318-936e-30862acd97e5" containerID="794ade07b1fe5623465f764c5eaf8d3c479eeb7e9a2066ff11ca2f40c30e5324" exitCode=0 Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.716551 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.734036 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.734413 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.234397544 +0000 UTC m=+115.105895596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.744579 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-n8bvp"] Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.747954 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.757889 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.758094 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.758349 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.758562 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.780125 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.804953 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.836402 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.836809 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.836856 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-tmp\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.836938 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.836960 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mf247\" 
(UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.837092 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.337076136 +0000 UTC m=+115.208574188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.943193 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-tmp\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.943609 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.943658 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.943779 5103 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.943878 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-tmp\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.943763 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.943886 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca 
podName:b15f695a-0fc1-4ab5-aad2-341f3bf6822d nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.443859129 +0000 UTC m=+115.315357181 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca") pod "marketplace-operator-547dbd544d-mf247" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.943890 5103 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.943975 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics podName:b15f695a-0fc1-4ab5-aad2-341f3bf6822d nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.443964021 +0000 UTC m=+115.315462073 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics") pod "marketplace-operator-547dbd544d-mf247" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.944181 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.444172766 +0000 UTC m=+115.315670818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:04 crc kubenswrapper[5103]: I0130 00:12:04.944264 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.958407 5103 projected.go:289] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.958536 5103 projected.go:289] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.958558 5103 projected.go:194] Error preparing data for projected volume kube-api-access-6bzkw for pod openshift-marketplace/marketplace-operator-547dbd544d-mf247: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Jan 30 00:12:04 crc kubenswrapper[5103]: E0130 00:12:04.958655 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw podName:b15f695a-0fc1-4ab5-aad2-341f3bf6822d nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.458631727 +0000 UTC m=+115.330129779 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bzkw" (UniqueName: "kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw") pod "marketplace-operator-547dbd544d-mf247" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Jan 30 00:12:04 crc kubenswrapper[5103]: W0130 00:12:04.970178 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1c445e1_3a33_419a_bd9a_0314b23539f7.slice/crio-27c628f9ccc49f2259855656ac2f066826629c44e23db39f2aabb1f7ab48dccb WatchSource:0}: Error finding container 27c628f9ccc49f2259855656ac2f066826629c44e23db39f2aabb1f7ab48dccb: Status 404 returned error can't find the container with id 27c628f9ccc49f2259855656ac2f066826629c44e23db39f2aabb1f7ab48dccb Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.045361 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.045638 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.545580418 +0000 UTC m=+115.417078480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.046281 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.046481 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-config\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.046630 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.546616243 +0000 UTC m=+115.418114305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.046764 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.048137 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-config\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.054106 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.147694 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.147829 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.64780202 +0000 UTC m=+115.519300102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.148072 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.148363 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.148884 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.648867526 +0000 UTC m=+115.520365618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.154849 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ebc9fa5-f75b-4468-b4b8-83695dd067b6-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-w4q8t\" (UID: \"0ebc9fa5-f75b-4468-b4b8-83695dd067b6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: W0130 00:12:05.216706 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3a441e4_5ade_4309_938a_0f4fe130a721.slice/crio-3ff899e4bcbcb1d73af6c5cd292c3cb1cdff3b5962a0453699a9d1ec5f69e662 WatchSource:0}: Error finding container 3ff899e4bcbcb1d73af6c5cd292c3cb1cdff3b5962a0453699a9d1ec5f69e662: Status 404 returned error can't find the container with id 3ff899e4bcbcb1d73af6c5cd292c3cb1cdff3b5962a0453699a9d1ec5f69e662 Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.249515 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.249725 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.749678893 +0000 UTC m=+115.621176985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.250350 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.250815 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.75079067 +0000 UTC m=+115.622288752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.352256 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.352464 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.852431018 +0000 UTC m=+115.723929080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.352965 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.353341 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.853328519 +0000 UTC m=+115.724826581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.377345 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.455963 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.456323 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.456391 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.457002 5103 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.457198 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics podName:b15f695a-0fc1-4ab5-aad2-341f3bf6822d nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.457119119 +0000 UTC m=+116.328617211 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics") pod "marketplace-operator-547dbd544d-mf247" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.457328 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:05.957304264 +0000 UTC m=+115.828802376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.457407 5103 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.457544 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca podName:b15f695a-0fc1-4ab5-aad2-341f3bf6822d nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.457454277 +0000 UTC m=+116.328952379 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca") pod "marketplace-operator-547dbd544d-mf247" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.558523 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.558637 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.558658 5103 projected.go:289] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.558680 5103 projected.go:289] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.558691 5103 projected.go:194] Error preparing data for projected volume kube-api-access-6bzkw for pod openshift-marketplace/marketplace-operator-547dbd544d-mf247: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.558754 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw podName:b15f695a-0fc1-4ab5-aad2-341f3bf6822d nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.558735486 +0000 UTC m=+116.430233538 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bzkw" (UniqueName: "kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw") pod "marketplace-operator-547dbd544d-mf247" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.558936 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.058925891 +0000 UTC m=+115.930423943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: W0130 00:12:05.603439 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ebc9fa5_f75b_4468_b4b8_83695dd067b6.slice/crio-737380cb5b0629610404ce614a393e5e873d54eb72e2fdaf90fc41af38ef80be WatchSource:0}: Error finding container 737380cb5b0629610404ce614a393e5e873d54eb72e2fdaf90fc41af38ef80be: Status 404 returned error can't find the container with id 737380cb5b0629610404ce614a393e5e873d54eb72e2fdaf90fc41af38ef80be Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.659465 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.659587 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.159565854 +0000 UTC m=+116.031063916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.659904 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.661029 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.161003789 +0000 UTC m=+116.032501851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.762550 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.762666 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.262642006 +0000 UTC m=+116.134140068 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.762805 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.763202 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.26318783 +0000 UTC m=+116.134685912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.864033 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.864244 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.364214202 +0000 UTC m=+116.235712284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.864612 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.865149 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.365126024 +0000 UTC m=+116.236624086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.966573 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.966800 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.466769072 +0000 UTC m=+116.338267134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:05 crc kubenswrapper[5103]: I0130 00:12:05.967925 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:05 crc kubenswrapper[5103]: E0130 00:12:05.968341 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.46832795 +0000 UTC m=+116.339826012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.069116 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.069355 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.569309902 +0000 UTC m=+116.440807994 (durationBeforeRetry 500ms). 
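[annotation] The MountDevice/TearDown failures repeating above all report the same condition: the kubelet has no registered CSI plugin named kubevirt.io.hostpath-provisioner, so every attach and detach attempt for pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 is requeued on the 500ms backoff shown in the nestedpendingoperations entries. One way to confirm whether node-level driver registration ever happened is to read the node's CSINode object; the sketch below is illustrative only (the node name "crc" is taken from the log, the kubeconfig path and program structure are assumptions, not part of the capture).

// csicheck.go - minimal sketch: list the CSI drivers the kubelet has
// registered on a node by reading its CSINode object.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the environment.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// CSINode mirrors the kubelet's plugin-registration state: a driver
	// missing from Spec.Drivers is exactly the "not found in the list of
	// registered CSI drivers" condition reported in the log above.
	csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Printf("registered driver: %s (nodeID=%s)\n", d.Name, d.NodeID)
	}
}

If kubevirt.io.hostpath-provisioner is absent from that list, the condition lies with the driver's node plugin registration rather than with the retrying kubelet, which will keep emitting the entries that follow.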
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.069663 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.070188 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.570169472 +0000 UTC m=+116.441667554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.171120 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.671016511 +0000 UTC m=+116.542514603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.170898 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.171853 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.172311 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.672292942 +0000 UTC m=+116.543791034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.276073 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.276238 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.776214505 +0000 UTC m=+116.647712577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.276338 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.276747 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.776735897 +0000 UTC m=+116.648233959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.305553 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.305815 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.309801 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.310759 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.310965 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.311101 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.312602 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.313166 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.313688 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.316429 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gdlhx"] Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.317064 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" podStartSLOduration=93.317034386 podStartE2EDuration="1m33.317034386s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:05.261309585 +0000 UTC m=+115.132807717" watchObservedRunningTime="2026-01-30 00:12:06.317034386 +0000 UTC m=+116.188532448" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.323956 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.379000 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.379250 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.879211205 +0000 UTC m=+116.750709277 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.379364 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-certs\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.379644 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r822\" (UniqueName: \"kubernetes.io/projected/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-kube-api-access-7r822\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.379819 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.379863 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-node-bootstrap-token\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.380298 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.880281961 +0000 UTC m=+116.751780023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.423512 5103 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-spmxr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.423589 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.423772 5103 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-7csdm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.423829 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.482222 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.483253 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.483315 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.483405 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7r822\" (UniqueName: 
\"kubernetes.io/projected/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-kube-api-access-7r822\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.483592 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-node-bootstrap-token\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.483740 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-certs\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.485097 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:06.985029753 +0000 UTC m=+116.856527815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.491893 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.503563 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-node-bootstrap-token\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.506229 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r822\" (UniqueName: \"kubernetes.io/projected/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-kube-api-access-7r822\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.507952 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.516422 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2b7c825f-c092-4d5b-9a1d-be16df92e5a2-certs\") pod \"machine-config-server-n8bvp\" (UID: \"2b7c825f-c092-4d5b-9a1d-be16df92e5a2\") " pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.584816 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.584899 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.585199 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.085180685 +0000 UTC m=+116.956678727 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.590639 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") pod \"marketplace-operator-547dbd544d-mf247\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.643549 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.658245 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-n8bvp" Jan 30 00:12:06 crc kubenswrapper[5103]: W0130 00:12:06.678501 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b7c825f_c092_4d5b_9a1d_be16df92e5a2.slice/crio-69d284c8f14aa836dbaa76d6d8aa2cb36b6d5967b964cb3f470c8d1981e0ca41 WatchSource:0}: Error finding container 69d284c8f14aa836dbaa76d6d8aa2cb36b6d5967b964cb3f470c8d1981e0ca41: Status 404 returned error can't find the container with id 69d284c8f14aa836dbaa76d6d8aa2cb36b6d5967b964cb3f470c8d1981e0ca41 Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.691616 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.692257 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.192225943 +0000 UTC m=+117.063724025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.794609 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.794958 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.294942277 +0000 UTC m=+117.166440329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: W0130 00:12:06.862118 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb15f695a_0fc1_4ab5_aad2_341f3bf6822d.slice/crio-0368a0c326937f9c7deb7edf4ed88ddf03334595ee1cd83191767d2fb8e30f45 WatchSource:0}: Error finding container 0368a0c326937f9c7deb7edf4ed88ddf03334595ee1cd83191767d2fb8e30f45: Status 404 returned error can't find the container with id 0368a0c326937f9c7deb7edf4ed88ddf03334595ee1cd83191767d2fb8e30f45 Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.896486 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.896743 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.396710638 +0000 UTC m=+117.268208700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.897256 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:06 crc kubenswrapper[5103]: E0130 00:12:06.897796 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.397771083 +0000 UTC m=+117.269269175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:06 crc kubenswrapper[5103]: I0130 00:12:06.998522 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:06.998965 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.49894376 +0000 UTC m=+117.370441822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.100258 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.100753 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.600732701 +0000 UTC m=+117.472230763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.202498 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.202837 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.702798829 +0000 UTC m=+117.574296921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.306661 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.307304 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.807274455 +0000 UTC m=+117.678772537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.408385 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.408625 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.908594955 +0000 UTC m=+117.780093007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.408722 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.409197 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:07.909181889 +0000 UTC m=+117.780679951 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.510096 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.510210 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.010191181 +0000 UTC m=+117.881689223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.510349 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.510629 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.010619092 +0000 UTC m=+117.882117144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.611709 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.612173 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.112114796 +0000 UTC m=+117.983612908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.713796 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.714387 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.214359148 +0000 UTC m=+118.085857330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.814944 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.815150 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.315123274 +0000 UTC m=+118.186621316 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.815525 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.815902 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.315882643 +0000 UTC m=+118.187380705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.917475 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.917761 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.417719005 +0000 UTC m=+118.289217097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:07 crc kubenswrapper[5103]: I0130 00:12:07.918279 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:07 crc kubenswrapper[5103]: E0130 00:12:07.918730 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.418708499 +0000 UTC m=+118.290206591 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.019229 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.019551 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.519513386 +0000 UTC m=+118.391011478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.019862 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.020461 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.520437649 +0000 UTC m=+118.391935741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.121722 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.121917 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.621879412 +0000 UTC m=+118.493377504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.122163 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.122754 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.622729292 +0000 UTC m=+118.494227384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.218510 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-2xrjj"] Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.218571 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" event={"ID":"4022194a-f5e9-494f-b079-ddd414c3da50","Type":"ContainerStarted","Data":"4a1663ce5228deaa796f1880984d01701e616d37c69f0f1cd59e42004c093c1c"} Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.218607 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r9ddz"] Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.218710 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.223447 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.223643 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-j77tr" podStartSLOduration=95.223614041 podStartE2EDuration="1m35.223614041s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:06.488923648 +0000 UTC m=+116.360421710" watchObservedRunningTime="2026-01-30 00:12:08.223614041 +0000 UTC m=+118.095112143" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.223996 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.72397092 +0000 UTC m=+118.595469012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.224244 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" podStartSLOduration=95.224227426 podStartE2EDuration="1m35.224227426s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:06.4576584 +0000 UTC m=+116.329156472" watchObservedRunningTime="2026-01-30 00:12:08.224227426 +0000 UTC m=+118.095725528" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.227099 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.229330 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.229661 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.327038 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cf568a51-0f76-4d77-87d4-136b487786a9-tmp-dir\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.327174 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs57g\" (UniqueName: \"kubernetes.io/projected/cf568a51-0f76-4d77-87d4-136b487786a9-kube-api-access-fs57g\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.327220 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf568a51-0f76-4d77-87d4-136b487786a9-config-volume\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.327247 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf568a51-0f76-4d77-87d4-136b487786a9-metrics-tls\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.327402 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " 
pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.328028 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.827999896 +0000 UTC m=+118.699497978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.428887 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.429209 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.929169572 +0000 UTC m=+118.800667624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.429616 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fs57g\" (UniqueName: \"kubernetes.io/projected/cf568a51-0f76-4d77-87d4-136b487786a9-kube-api-access-fs57g\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.429670 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf568a51-0f76-4d77-87d4-136b487786a9-config-volume\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.429709 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf568a51-0f76-4d77-87d4-136b487786a9-metrics-tls\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.429925 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.430290 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cf568a51-0f76-4d77-87d4-136b487786a9-tmp-dir\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.430315 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:08.930303809 +0000 UTC m=+118.801801941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.430951 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cf568a51-0f76-4d77-87d4-136b487786a9-tmp-dir\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.431156 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf568a51-0f76-4d77-87d4-136b487786a9-config-volume\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.437448 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf568a51-0f76-4d77-87d4-136b487786a9-metrics-tls\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.455869 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs57g\" (UniqueName: \"kubernetes.io/projected/cf568a51-0f76-4d77-87d4-136b487786a9-kube-api-access-fs57g\") pod \"dns-default-gdlhx\" (UID: \"cf568a51-0f76-4d77-87d4-136b487786a9\") " pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.531459 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.531754 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:09.031710941 +0000 UTC m=+118.903209023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.532506 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.533255 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.033233538 +0000 UTC m=+118.904731610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.540214 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.634625 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.634878 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.134831305 +0000 UTC m=+119.006329397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.635179 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.635599 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.135583043 +0000 UTC m=+119.007081095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.736685 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.736800 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.23677412 +0000 UTC m=+119.108272172 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.737413 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.741359 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.24133265 +0000 UTC m=+119.112830722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.838681 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.838983 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.33894587 +0000 UTC m=+119.210443942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:08 crc kubenswrapper[5103]: I0130 00:12:08.940905 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:08 crc kubenswrapper[5103]: E0130 00:12:08.941457 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.441423698 +0000 UTC m=+119.312921840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.042761 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.042943 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.542919642 +0000 UTC m=+119.414417704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.043305 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.043691 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.54368088 +0000 UTC m=+119.415178942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.144991 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.145258 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.645228016 +0000 UTC m=+119.516726078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.145787 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.146247 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.64623031 +0000 UTC m=+119.517728372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.246959 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.247227 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.747185651 +0000 UTC m=+119.618683743 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.248882 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.249365 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.749345613 +0000 UTC m=+119.620843675 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.297335 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerStarted","Data":"cf440fc95fced9c1dec5f756ce0700f4d01d4bcefdae5034ff9f16546ffccb75"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.297671 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.302257 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.303218 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.303325 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.303420 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.303343 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.303550 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.304558 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.305463 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.305959 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.307540 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.307631 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310218 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310278 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" event={"ID":"c3bbecd7-5e60-4290-bc24-b4f292d0d515","Type":"ContainerStarted","Data":"f75723f85c118908ad0270b5ef4a061e86c4987c9d6676b7ee5a570cf1358a52"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310322 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" event={"ID":"bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0","Type":"ContainerStarted","Data":"220e9a40b0e50e9056393153e34715e3753415e89a3a1e0a8cb90d8927b042f1"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310375 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-x6t57" 
event={"ID":"c5938973-a6f9-4d60-b605-3f02b2c1c84f","Type":"ContainerStarted","Data":"14c110c2aafcebf401f14c4e8482618b6d3c8697a12a7383624870029d5a39de"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310401 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" event={"ID":"e9100695-b78d-4b2f-9cea-9d022064c792","Type":"ContainerStarted","Data":"fcd20598200cbf757c0c2051caf7ebf16a7451c09f1b9792561f7689e329b0b7"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310426 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" event={"ID":"f3b3db2b-ab99-483b-a13c-4947269bc330","Type":"ContainerStarted","Data":"64fbde2915247d404cac68a838f6050ce8b4c6a918187c78d666a0041688dd1b"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310452 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" event={"ID":"fcede4f0-4721-47c1-bc52-b68bf7ad29d4","Type":"ContainerStarted","Data":"7ba2f75bcb6ca4a54baf980dd9b4ccb133b13f4bbb15764f2888c2a87e17a239"} Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.310485 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-qgd5c"] Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.311564 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29495520-x6t57" podStartSLOduration=97.311534403 podStartE2EDuration="1m37.311534403s" podCreationTimestamp="2026-01-30 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:08.672573961 +0000 UTC m=+118.544072023" watchObservedRunningTime="2026-01-30 00:12:09.311534403 +0000 UTC m=+119.183032505" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.311972 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.312247 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.334227 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.340938 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.350369 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.350581 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.85054549 +0000 UTC m=+119.722043562 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.351254 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.351654 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.851635917 +0000 UTC m=+119.723133979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.452720 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.452864 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.452901 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.452926 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc 
kubenswrapper[5103]: I0130 00:12:09.452963 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h6wk\" (UniqueName: \"kubernetes.io/projected/10feec13-3e3a-46a2-8fdd-c1098eebd334-kube-api-access-7h6wk\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.452982 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.453140 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:09.953035739 +0000 UTC m=+119.824533851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453366 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453444 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453484 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-policies\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453508 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453525 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453664 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-dir\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453780 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453907 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.453959 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555038 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555151 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-dir\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555204 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555270 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555305 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555358 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555491 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-dir\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555632 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555719 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555828 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555890 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7h6wk\" (UniqueName: \"kubernetes.io/projected/10feec13-3e3a-46a2-8fdd-c1098eebd334-kube-api-access-7h6wk\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.555925 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.556004 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.556087 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.556134 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-policies\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.556176 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.557371 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.057343191 +0000 UTC m=+119.928841283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.657698 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.657979 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.157916623 +0000 UTC m=+120.029414715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.658395 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.658945 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.158920327 +0000 UTC m=+120.030418409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.760488 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.760677 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.260640337 +0000 UTC m=+120.132138449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.761117 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.761566 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.261548689 +0000 UTC m=+120.133046771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.862807 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.863822 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.363796151 +0000 UTC m=+120.235294243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.864823 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.865581 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.867277 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.868115 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-policies\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.872858 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.873483 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.875269 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.875474 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.875821 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.875981 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h6wk\" (UniqueName: \"kubernetes.io/projected/10feec13-3e3a-46a2-8fdd-c1098eebd334-kube-api-access-7h6wk\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.876023 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.876168 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.877098 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-r9ddz\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:09 crc kubenswrapper[5103]: I0130 00:12:09.965705 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:09 crc kubenswrapper[5103]: E0130 00:12:09.966114 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.466099285 +0000 UTC m=+120.337597337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.017094 5103 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-spmxr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.017233 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.023445 5103 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-6v8cn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:5443/healthz\": dial tcp 10.217.0.14:5443: connect: connection refused" start-of-body= Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.023535 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" podUID="fcede4f0-4721-47c1-bc52-b68bf7ad29d4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.14:5443/healthz\": dial tcp 10.217.0.14:5443: connect: connection refused" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.032208 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.067127 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.067424 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.567370542 +0000 UTC m=+120.438868634 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.068291 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.068709 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.568692474 +0000 UTC m=+120.440190546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.169189 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.169392 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.669347778 +0000 UTC m=+120.540845860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.169607 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.170697 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.669987113 +0000 UTC m=+120.541485165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.272420 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.272703 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.772664336 +0000 UTC m=+120.644162408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.272884 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.273232 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.77321889 +0000 UTC m=+120.644716932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.378856 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.379695 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.879675964 +0000 UTC m=+120.751174026 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.480692 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.481154 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:10.981139367 +0000 UTC m=+120.852637419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.561852 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" event={"ID":"23500895-f472-4de5-afda-f1cc02807ceb","Type":"ContainerStarted","Data":"d2bb55bbf63772c72d65c62fbddf187a43731548106280f84e16786c76288a58"} Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.561918 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.561969 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-rgqmz"] Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.565029 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" podStartSLOduration=97.564997283 podStartE2EDuration="1m37.564997283s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:10.039096397 +0000 UTC m=+119.910594479" watchObservedRunningTime="2026-01-30 00:12:10.564997283 +0000 UTC m=+120.436495375" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.581948 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.582138 5103 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.082114049 +0000 UTC m=+120.953612101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.582228 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.582590 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.08258331 +0000 UTC m=+120.954081362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.683242 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.683503 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.183461489 +0000 UTC m=+121.054959541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.683636 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-service-ca-bundle\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.683690 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-metrics-certs\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.683888 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-default-certificate\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.683931 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc9kk\" (UniqueName: \"kubernetes.io/projected/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-kube-api-access-bc9kk\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.684181 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.684223 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-stats-auth\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.684573 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.184560136 +0000 UTC m=+121.056058248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.784986 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.785138 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.285115097 +0000 UTC m=+121.156613149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.785892 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.785925 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-stats-auth\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.785969 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-service-ca-bundle\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.785991 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-metrics-certs\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786074 5103 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: object "openshift-ingress"/"service-ca-bundle" not 
registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786087 5103 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: object "openshift-ingress"/"router-metrics-certs-default" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786153 5103 secret.go:189] Couldn't get secret openshift-ingress/router-stats-default: object "openshift-ingress"/"router-stats-default" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786149 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-service-ca-bundle podName:2e4d66cc-52c4-40ae-a23a-4aa4831adfb4 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.286137642 +0000 UTC m=+121.157635694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-service-ca-bundle") pod "router-default-68cf44c8b8-qgd5c" (UID: "2e4d66cc-52c4-40ae-a23a-4aa4831adfb4") : object "openshift-ingress"/"service-ca-bundle" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786441 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-metrics-certs podName:2e4d66cc-52c4-40ae-a23a-4aa4831adfb4 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.286426469 +0000 UTC m=+121.157924581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-metrics-certs") pod "router-default-68cf44c8b8-qgd5c" (UID: "2e4d66cc-52c4-40ae-a23a-4aa4831adfb4") : object "openshift-ingress"/"router-metrics-certs-default" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786483 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-stats-auth podName:2e4d66cc-52c4-40ae-a23a-4aa4831adfb4 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.286453479 +0000 UTC m=+121.157951761 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-stats-auth") pod "router-default-68cf44c8b8-qgd5c" (UID: "2e4d66cc-52c4-40ae-a23a-4aa4831adfb4") : object "openshift-ingress"/"router-stats-default" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786513 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.286505061 +0000 UTC m=+121.158003253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.786593 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-default-certificate\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.786625 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bc9kk\" (UniqueName: \"kubernetes.io/projected/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-kube-api-access-bc9kk\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786695 5103 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: object "openshift-ingress"/"router-certs-default" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.786764 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-default-certificate podName:2e4d66cc-52c4-40ae-a23a-4aa4831adfb4 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.286737466 +0000 UTC m=+121.158235558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-default-certificate") pod "router-default-68cf44c8b8-qgd5c" (UID: "2e4d66cc-52c4-40ae-a23a-4aa4831adfb4") : object "openshift-ingress"/"router-certs-default" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.800428 5103 projected.go:289] Couldn't get configMap openshift-ingress/kube-root-ca.crt: object "openshift-ingress"/"kube-root-ca.crt" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.800491 5103 projected.go:289] Couldn't get configMap openshift-ingress/openshift-service-ca.crt: object "openshift-ingress"/"openshift-service-ca.crt" not registered Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.800505 5103 projected.go:194] Error preparing data for projected volume kube-api-access-bc9kk for pod openshift-ingress/router-default-68cf44c8b8-qgd5c: [object "openshift-ingress"/"kube-root-ca.crt" not registered, object "openshift-ingress"/"openshift-service-ca.crt" not registered] Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.800576 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-kube-api-access-bc9kk podName:2e4d66cc-52c4-40ae-a23a-4aa4831adfb4 nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.300551422 +0000 UTC m=+121.172049574 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bc9kk" (UniqueName: "kubernetes.io/projected/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-kube-api-access-bc9kk") pod "router-default-68cf44c8b8-qgd5c" (UID: "2e4d66cc-52c4-40ae-a23a-4aa4831adfb4") : [object "openshift-ingress"/"kube-root-ca.crt" not registered, object "openshift-ingress"/"openshift-service-ca.crt" not registered] Jan 30 00:12:10 crc kubenswrapper[5103]: I0130 00:12:10.993528 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:10 crc kubenswrapper[5103]: E0130 00:12:10.995302 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.495260089 +0000 UTC m=+121.366758181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.040798 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.040895 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.096745 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.097147 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.597131222 +0000 UTC m=+121.468629284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.197546 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.197742 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.697715564 +0000 UTC m=+121.569213626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.198089 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.198421 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.69840606 +0000 UTC m=+121.569904112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.279842 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" event={"ID":"ca146169-65b5-4eed-be41-43bb8bf87656","Type":"ContainerStarted","Data":"c247a57b7fe7f2aa890d312a8303de8bb0e377c2050e84e99b30f9e6da1d45f3"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.279912 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx" event={"ID":"22187967-c3cb-4aec-b6d5-65c7c6167554","Type":"ContainerStarted","Data":"5b63698741944fc197cb263f73a75657a3d81eef13d32ee8cbee603537df5169"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.279930 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" event={"ID":"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e","Type":"ContainerStarted","Data":"6249fcb5844b660333c4ac49692eac2cafb185ec4dbbebfcbc2ce3bb1e6f68d6"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.279951 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.279967 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b" event={"ID":"8ee6bca0-0d30-4653-b2a4-a79ebde1fed9","Type":"ContainerStarted","Data":"02061e1d5fc241364294c16f2752c64bc77dfc52fe8e426f1bbcf1d06b07d88f"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.280006 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.281620 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.283864 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.285525 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.286962 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.287272 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.287545 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.287922 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.288150 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.288188 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.288654 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289195 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-4rfkh"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289237 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" event={"ID":"aa3b2afd-f2d4-40f0-bbd3-19225d26438e","Type":"ContainerStarted","Data":"a10082817156a05dfaffb5e94545c160e23cbf636e1b055cd5f582f13eeccb23"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289260 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" event={"ID":"aa3b2afd-f2d4-40f0-bbd3-19225d26438e","Type":"ContainerStarted","Data":"b5c30c4a11fe11b38adaf1c964255efbb88e8214b00aab112b2963179e2c1b06"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289278 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-spmxr"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289296 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289313 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-j77tr"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289331 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289345 5103 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289360 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" event={"ID":"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204","Type":"ContainerStarted","Data":"712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289379 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289395 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-dtdff"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289411 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" event={"ID":"91703ab7-2f05-4831-8200-85210adf830b","Type":"ContainerDied","Data":"f0078a0ce155b37c23086d472f5f677a2cdb7136a582b7aeb8db53e9394aa660"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289431 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289447 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" event={"ID":"f80439cc-c38d-4210-a203-f478704d9dcd","Type":"ContainerStarted","Data":"1d2e98ef1dc50c4908e70f14a0f924ff984fd6cbe6d6caca5516013a7e12baab"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289465 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289482 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" event={"ID":"a0ff7eb1-7b00-4318-936e-30862acd97e5","Type":"ContainerDied","Data":"794ade07b1fe5623465f764c5eaf8d3c479eeb7e9a2066ff11ca2f40c30e5324"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289500 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" event={"ID":"72531653-f2c6-4754-8209-24104364d6f4","Type":"ContainerStarted","Data":"0b2dea2b01a00baa58570f13ea4d5c67f2bb5bde5b5e20073a04eaba162eb45a"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289516 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" event={"ID":"3ed247cb-77c1-47fb-ad58-f14f03aae2f2","Type":"ContainerStarted","Data":"010a3e79682217ed5f4858425ee9d8e68d2b2f0b6dedd9af218d4cec3798c424"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289529 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" event={"ID":"0ebc9fa5-f75b-4468-b4b8-83695dd067b6","Type":"ContainerStarted","Data":"737380cb5b0629610404ce614a393e5e873d54eb72e2fdaf90fc41af38ef80be"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289542 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j"] Jan 30 00:12:11 crc 
kubenswrapper[5103]: I0130 00:12:11.289545 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289556 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" event={"ID":"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06","Type":"ContainerStarted","Data":"22cf5ca5b9dc2b7338a29b6c0ecec87eac0aa4aac8490606aa762bcf17a7311c"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289570 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" event={"ID":"8550f022-16a5-4fac-a94e-fc322ee0cb9d","Type":"ContainerStarted","Data":"7436156915c575beccaacbc400badce8bfcf50425c941304b0e657d7e619767b"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289584 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289598 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" event={"ID":"c9dfcfad-0e85-4b3e-9a33-3729f7033251","Type":"ContainerStarted","Data":"4a24827a85cd26b0f0d53622ffa0da5764d3f74ad95b6d2fec9319059ff15c75"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289609 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" event={"ID":"35998b47-ed37-4a50-9553-18147918d9cb","Type":"ContainerStarted","Data":"6d6114ceb68ae67260e01f25a1b5cc7e5611f1aca85649fc5a25919d41ccae4a"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289621 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" event={"ID":"8dc8aa23-eb1a-486e-9462-499486335cdc","Type":"ContainerStarted","Data":"408e260443325c70f9c09694dafcfc66a246ef2b0a79a37358551cb0bc1e8007"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289633 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" event={"ID":"c3bbecd7-5e60-4290-bc24-b4f292d0d515","Type":"ContainerStarted","Data":"660c21923d02c550d66116b3d77994184dda07eefda1e6d7d5b7b4870b84e0f1"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289645 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289660 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289675 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29495520-x6t57"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289690 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" event={"ID":"179763d8-8dea-40e5-ba89-1a848fbf519a","Type":"ContainerStarted","Data":"aa235fa5321f6a87667237367d6a035c2a4259ba213eb0974341d9e1f7e3562c"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289705 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" event={"ID":"f1c445e1-3a33-419a-bd9a-0314b23539f7","Type":"ContainerStarted","Data":"27c628f9ccc49f2259855656ac2f066826629c44e23db39f2aabb1f7ab48dccb"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289722 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289736 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289752 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7v6vx"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289763 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" event={"ID":"42f46e1b-e6a2-499c-9e01-fe08785a78a4","Type":"ContainerStarted","Data":"a371cabbeee1abe2e2c2ce5fb9e2ceca15f9e6c746f56f73aa9c6ceab42e9720"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289775 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7v6vx" event={"ID":"9bef77c6-141b-4cff-a91d-7515860a6a2a","Type":"ContainerStarted","Data":"d49beb1fb54d8b2a6fe43d988a8cfefa253a3c2b72d058a53b17fe4322292b64"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289785 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-dtdff" event={"ID":"02410934-0df2-4e17-9042-91fa47becda6","Type":"ContainerStarted","Data":"5c0cd996ce9c244e51448d155956bda81f09898040572971daf165985965f737"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289799 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-5tp7b"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289810 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" event={"ID":"ca146169-65b5-4eed-be41-43bb8bf87656","Type":"ContainerStarted","Data":"4acbf56ab49e55320969361efd63d9b6fceec3394fc78dbf3c14fa0df602b17a"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289822 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4ltx6"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289833 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289843 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" event={"ID":"a3a441e4-5ade-4309-938a-0f4fe130a721","Type":"ContainerStarted","Data":"3ff899e4bcbcb1d73af6c5cd292c3cb1cdff3b5962a0453699a9d1ec5f69e662"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289853 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" event={"ID":"4b196a79-ecff-4ec8-8338-33436cfd3dcc","Type":"ContainerStarted","Data":"2236dd21f8e2bee83df37b2fa78eb0cbaf3b44b8ed4703a935e77c81ecdb04a4"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289866 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289878 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.289890 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-69ms4"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299136 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299297 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-default-certificate\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.299335 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.79931215 +0000 UTC m=+121.670810212 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299463 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-tmp-dir\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299574 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299720 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-stats-auth\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299905 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrqpf\" 
(UniqueName: \"kubernetes.io/projected/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-kube-api-access-wrqpf\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.299990 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-metrics-tls\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.300045 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.800033978 +0000 UTC m=+121.671532040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.300100 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-service-ca-bundle\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.300139 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-metrics-certs\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.304283 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.310742 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-service-ca-bundle\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.316715 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-default-certificate\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.317994 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-stats-auth\") pod 
\"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.320214 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-metrics-certs\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402095 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.402211 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.902187478 +0000 UTC m=+121.773685530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402310 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-tmp-dir\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402393 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402470 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wrqpf\" (UniqueName: \"kubernetes.io/projected/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-kube-api-access-wrqpf\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402492 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-metrics-tls\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402610 5103 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bc9kk\" (UniqueName: \"kubernetes.io/projected/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-kube-api-access-bc9kk\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.402900 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-tmp-dir\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.403363 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:11.903346666 +0000 UTC m=+121.774844788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.409469 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-metrics-tls\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.409514 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc9kk\" (UniqueName: \"kubernetes.io/projected/2e4d66cc-52c4-40ae-a23a-4aa4831adfb4-kube-api-access-bc9kk\") pod \"router-default-68cf44c8b8-qgd5c\" (UID: \"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4\") " pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.436997 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrqpf\" (UniqueName: \"kubernetes.io/projected/69ff5998-10ea-4bf2-85ef-6f3621d2f1c6-kube-api-access-wrqpf\") pod \"dns-operator-799b87ffcd-rgqmz\" (UID: \"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.503900 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.504076 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.00403772 +0000 UTC m=+121.875535782 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.504341 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.504753 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.004743177 +0000 UTC m=+121.876241229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.566444 5103 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-7csdm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.566869 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.592943 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" event={"ID":"fcede4f0-4721-47c1-bc52-b68bf7ad29d4","Type":"ContainerStarted","Data":"de5f2e232def9eb29460cb73b3b6a441cefc8bedfec4bbb3082f3590c17d13f5"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.593213 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.593282 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-n8bvp" event={"ID":"2b7c825f-c092-4d5b-9a1d-be16df92e5a2","Type":"ContainerStarted","Data":"69d284c8f14aa836dbaa76d6d8aa2cb36b6d5967b964cb3f470c8d1981e0ca41"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.593307 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerStarted","Data":"0368a0c326937f9c7deb7edf4ed88ddf03334595ee1cd83191767d2fb8e30f45"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.593325 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2mh7r"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.597631 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.597818 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.597951 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.608967 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.609101 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.10906829 +0000 UTC m=+121.980566342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.609739 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-plugins-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.609834 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-mountpoint-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.609872 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.609998 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-csi-data-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.610017 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-socket-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.610036 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grlk8\" (UniqueName: \"kubernetes.io/projected/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-kube-api-access-grlk8\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.610090 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-registration-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.612415 5103 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.112398601 +0000 UTC m=+121.983896653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.617620 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.618566 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.630020 5103 patch_prober.go:28] interesting pod/console-operator-67c89758df-4rfkh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.630104 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" podUID="bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.675592 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.676730 5103 patch_prober.go:28] interesting pod/console-operator-67c89758df-4rfkh container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.676797 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" podUID="bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.678246 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59228: no serving certificate available for the kubelet" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685140 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685186 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" event={"ID":"f3b3db2b-ab99-483b-a13c-4947269bc330","Type":"ContainerStarted","Data":"25ab8682e26ae83def4771bae81411f562ed9fea06908780b37e0a89075a13b8"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685210 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" event={"ID":"23500895-f472-4de5-afda-f1cc02807ceb","Type":"ContainerStarted","Data":"bd870aed1e11cb9ea1fecd7f733bcfe8e65906b77a196b0d59a7946da4604f87"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685226 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gdlhx" event={"ID":"cf568a51-0f76-4d77-87d4-136b487786a9","Type":"ContainerStarted","Data":"22e1cc1ed66ae3403c9a5ecd0603d2ba86d3c9b62b69ff8de7d68025c41fd882"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685245 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685266 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685279 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gdlhx"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685291 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" event={"ID":"b9b4e43e-61bd-46ba-a825-e0bca8c8da4e","Type":"ContainerStarted","Data":"00c56f5736ab1c2203b6302368b034ae3ac0d41bb133d09d39041cb5a15bbfcd"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685309 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" event={"ID":"10feec13-3e3a-46a2-8fdd-c1098eebd334","Type":"ContainerStarted","Data":"e3d46683d3f3d86228a063dcb193d36e8067e6dad542d18de17ac86ad6dc3b86"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685323 5103 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jfm6p"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685335 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mf247"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685346 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685357 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-69ms4"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685367 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2mh7r"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685376 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685387 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685397 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685406 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-rgqmz"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685417 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r9ddz"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685426 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.685442 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cnbd2"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.687269 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.692701 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.694946 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.695166 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.695923 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.696962 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6tmbq" podStartSLOduration=98.696948314 podStartE2EDuration="1m38.696948314s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:03.686756199 +0000 UTC m=+113.558254261" watchObservedRunningTime="2026-01-30 00:12:11.696948314 +0000 UTC m=+121.568446426" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.710574 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.710781 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.210765979 +0000 UTC m=+122.082264031 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711312 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2bcf9d4-8f8c-4722-95c8-03ff81e4b300-cert\") pod \"ingress-canary-2mh7r\" (UID: \"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300\") " pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711378 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-csi-data-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711404 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-socket-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711426 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-grlk8\" (UniqueName: \"kubernetes.io/projected/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-kube-api-access-grlk8\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711475 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-registration-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711613 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-plugins-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711672 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-mountpoint-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711703 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: 
\"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.711725 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2lz\" (UniqueName: \"kubernetes.io/projected/f2bcf9d4-8f8c-4722-95c8-03ff81e4b300-kube-api-access-5m2lz\") pod \"ingress-canary-2mh7r\" (UID: \"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300\") " pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.713191 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-csi-data-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.713424 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-plugins-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.713424 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-registration-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.713582 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-socket-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.713693 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-mountpoint-dir\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.713946 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.213929366 +0000 UTC m=+122.085427488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.736452 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-grlk8\" (UniqueName: \"kubernetes.io/projected/fe0b1692-3dd7-4854-b53d-c32cd8162e1b-kube-api-access-grlk8\") pod \"csi-hostpathplugin-69ms4\" (UID: \"fe0b1692-3dd7-4854-b53d-c32cd8162e1b\") " pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.770747 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-clmhf"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771089 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-spmxr"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771105 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-6z46s"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771113 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-2xrjj"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771124 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771133 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-4rfkh"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771141 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771156 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29495520-x6t57"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771169 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-j77tr"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771176 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8qhdx"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771198 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-jpc9b"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771211 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771232 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-4ltx6"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771278 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh"] Jan 30 00:12:11 crc 
kubenswrapper[5103]: I0130 00:12:11.771289 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771298 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-5tp7b"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.770981 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.771532 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.774113 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.776766 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59232: no serving certificate available for the kubelet" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.786661 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.804065 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812422 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812556 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbkqv\" (UniqueName: \"kubernetes.io/projected/e1617c52-82bc-4480-9bc4-e37e0264876e-kube-api-access-lbkqv\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812587 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1617c52-82bc-4480-9bc4-e37e0264876e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812658 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e1617c52-82bc-4480-9bc4-e37e0264876e-ready\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.812721 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:12.312690583 +0000 UTC m=+122.184188625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812801 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812848 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m2lz\" (UniqueName: \"kubernetes.io/projected/f2bcf9d4-8f8c-4722-95c8-03ff81e4b300-kube-api-access-5m2lz\") pod \"ingress-canary-2mh7r\" (UID: \"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300\") " pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.812948 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e1617c52-82bc-4480-9bc4-e37e0264876e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.813026 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2bcf9d4-8f8c-4722-95c8-03ff81e4b300-cert\") pod \"ingress-canary-2mh7r\" (UID: \"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300\") " pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.814385 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.314372704 +0000 UTC m=+122.185870746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.815581 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.815623 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" event={"ID":"8dc8aa23-eb1a-486e-9462-499486335cdc","Type":"ContainerStarted","Data":"7104caf1b88f03ec308833b1963b9304d7cb0c06133c827664919aac59c10ed2"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.818586 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.831777 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7v6vx"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.834154 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.835428 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2bcf9d4-8f8c-4722-95c8-03ff81e4b300-cert\") pod \"ingress-canary-2mh7r\" (UID: \"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300\") " pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.855584 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m2lz\" (UniqueName: \"kubernetes.io/projected/f2bcf9d4-8f8c-4722-95c8-03ff81e4b300-kube-api-access-5m2lz\") pod \"ingress-canary-2mh7r\" (UID: \"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300\") " pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.858196 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" event={"ID":"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4","Type":"ContainerStarted","Data":"cdae0d11c631ab549663e24b81f7bab5a9fd9beec8657d9a2ba7e1458b493106"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.865435 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.875094 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.883779 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59234: no serving certificate available for the kubelet" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.894109 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.894515 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.897067 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-dtdff"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.897830 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.898463 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.899297 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" event={"ID":"aa3b2afd-f2d4-40f0-bbd3-19225d26438e","Type":"ContainerStarted","Data":"1f238a546c0532137a93386abdf3038e4d5be698d7c8bcb43d71c649c8772903"} Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.899619 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.904911 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.906079 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.915502 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.916326 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e1617c52-82bc-4480-9bc4-e37e0264876e-ready\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.916399 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e1617c52-82bc-4480-9bc4-e37e0264876e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.916429 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" podStartSLOduration=98.916418611 podStartE2EDuration="1m38.916418611s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:06.436690551 +0000 UTC m=+116.308188623" watchObservedRunningTime="2026-01-30 00:12:11.916418611 +0000 UTC m=+121.787916853" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.916556 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lbkqv\" (UniqueName: \"kubernetes.io/projected/e1617c52-82bc-4480-9bc4-e37e0264876e-kube-api-access-lbkqv\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.916595 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1617c52-82bc-4480-9bc4-e37e0264876e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: E0130 00:12:11.916731 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.416715618 +0000 UTC m=+122.288213670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.916945 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e1617c52-82bc-4480-9bc4-e37e0264876e-ready\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.917704 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1617c52-82bc-4480-9bc4-e37e0264876e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.918581 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e1617c52-82bc-4480-9bc4-e37e0264876e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.918798 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-qsf67" podStartSLOduration=99.918790559 podStartE2EDuration="1m39.918790559s" podCreationTimestamp="2026-01-30 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:06.475738779 +0000 UTC m=+116.347236841" watchObservedRunningTime="2026-01-30 00:12:11.918790559 +0000 UTC m=+121.790288611" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.919953 5103 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-6v8cn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.14:5443/healthz\": dial tcp 10.217.0.14:5443: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.919995 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" podUID="fcede4f0-4721-47c1-bc52-b68bf7ad29d4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.14:5443/healthz\": dial tcp 10.217.0.14:5443: connect: connection refused" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.920974 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.921163 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.923062 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mf247"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.930242 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.932796 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.942833 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" podStartSLOduration=98.942814132 podStartE2EDuration="1m38.942814132s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:08.693915819 +0000 UTC m=+118.565413891" watchObservedRunningTime="2026-01-30 00:12:11.942814132 +0000 UTC m=+121.814312184" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.949729 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7hhqr" podStartSLOduration=98.949710129 podStartE2EDuration="1m38.949710129s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:08.708212356 +0000 UTC m=+118.579710418" watchObservedRunningTime="2026-01-30 00:12:11.949710129 +0000 UTC m=+121.821208181" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.953558 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gdlhx"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.958279 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r9ddz"] Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.965730 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbkqv\" (UniqueName: 
\"kubernetes.io/projected/e1617c52-82bc-4480-9bc4-e37e0264876e-kube-api-access-lbkqv\") pod \"cni-sysctl-allowlist-ds-cnbd2\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.975005 5103 patch_prober.go:28] interesting pod/console-operator-67c89758df-4rfkh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 00:12:11 crc kubenswrapper[5103]: I0130 00:12:11.975203 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" podUID="bfb3c35d-63fc-4a35-91ea-ef0e217fc5d0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:11.999637 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59236: no serving certificate available for the kubelet" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.020464 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.025321 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.525299454 +0000 UTC m=+122.396797696 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.044294 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2mh7r" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.093310 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59252: no serving certificate available for the kubelet" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.126257 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.126910 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:12.626888261 +0000 UTC m=+122.498386313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.128145 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-v2hgb" podStartSLOduration=99.128120681 podStartE2EDuration="1m39.128120681s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:11.637159982 +0000 UTC m=+121.508658064" watchObservedRunningTime="2026-01-30 00:12:12.128120681 +0000 UTC m=+121.999618733" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.133925 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.144360 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jsk5t" podStartSLOduration=99.144332704 podStartE2EDuration="1m39.144332704s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:11.842429825 +0000 UTC m=+121.713927877" watchObservedRunningTime="2026-01-30 00:12:12.144332704 +0000 UTC m=+122.015830756" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.158311 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zr8xm" podStartSLOduration=99.158254892 podStartE2EDuration="1m39.158254892s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:11.934347366 +0000 UTC m=+121.805845428" watchObservedRunningTime="2026-01-30 00:12:12.158254892 +0000 UTC m=+122.029752944" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.160924 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-dxgsc" podStartSLOduration=99.160912767 podStartE2EDuration="1m39.160912767s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:11.972344339 +0000 UTC m=+121.843842391" watchObservedRunningTime="2026-01-30 00:12:12.160912767 +0000 UTC m=+122.032410819" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.171062 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-rgqmz"] Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.183035 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59268: no serving certificate available for the kubelet" 
Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.225868 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59272: no serving certificate available for the kubelet" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.227898 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.228369 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.728356154 +0000 UTC m=+122.599854206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.329347 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.329617 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.829578142 +0000 UTC m=+122.701076194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.330234 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.330708 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.830691329 +0000 UTC m=+122.702189381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.349432 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-69ms4"] Jan 30 00:12:12 crc kubenswrapper[5103]: W0130 00:12:12.380713 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe0b1692_3dd7_4854_b53d_c32cd8162e1b.slice/crio-52a3d185ff100104821754eae9326aff61b04cdee822a2e7764f463d2e3f16d1 WatchSource:0}: Error finding container 52a3d185ff100104821754eae9326aff61b04cdee822a2e7764f463d2e3f16d1: Status 404 returned error can't find the container with id 52a3d185ff100104821754eae9326aff61b04cdee822a2e7764f463d2e3f16d1 Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.386588 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59288: no serving certificate available for the kubelet" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.410890 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2mh7r"] Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.435143 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.435258 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:12.935208176 +0000 UTC m=+122.806706228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.435997 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.436731 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:12.936719843 +0000 UTC m=+122.808217895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.537356 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.537537 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.03750963 +0000 UTC m=+122.909007692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.538285 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.538840 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.038830952 +0000 UTC m=+122.910329004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.639089 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.639313 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.13927821 +0000 UTC m=+123.010776262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.640378 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.640836 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.140825738 +0000 UTC m=+123.012323790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.741398 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.741586 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.241556783 +0000 UTC m=+123.113054845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.742229 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.742568 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.242553128 +0000 UTC m=+123.114051170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.775991 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.776101 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.843758 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.844008 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.34397062 +0000 UTC m=+123.215468682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.844476 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.844914 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.344895642 +0000 UTC m=+123.216393694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.907233 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" event={"ID":"f1c445e1-3a33-419a-bd9a-0314b23539f7","Type":"ContainerStarted","Data":"a6790b6b84b98ac0715e8ae1ea57b4ff27489c9d8bc09d2a4c34faaa2d387839"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.909014 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" event={"ID":"72531653-f2c6-4754-8209-24104364d6f4","Type":"ContainerStarted","Data":"755b769872c4c1c8e1133203e05165f9afa0d6264a8a7a7b26a990d175725976"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.910573 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" event={"ID":"3ed247cb-77c1-47fb-ad58-f14f03aae2f2","Type":"ContainerStarted","Data":"5618a8b65c664dc70f879be6c163a7f5280150d589d52cd4507f3676ec01a1f2"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.919784 5103 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-7csdm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": context deadline exceeded" start-of-body= Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.919851 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": context deadline exceeded" Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.925425 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" event={"ID":"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6","Type":"ContainerStarted","Data":"e92342672bb0b68b320b38b09be1530158a324962492b78f59e6e5cfc7c62ed0"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.927397 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" event={"ID":"91703ab7-2f05-4831-8200-85210adf830b","Type":"ContainerStarted","Data":"c6b51b19b3c7356936c3b9cae768bc16d2eb83e3dd1e5a42b4880e28b2d04278"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.928569 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" event={"ID":"8550f022-16a5-4fac-a94e-fc322ee0cb9d","Type":"ContainerStarted","Data":"77c121a946385b331f6aa376bc2d4849ed3f9628c7c02c301ca3ebdbf4d821b3"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.929879 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2mh7r" 
event={"ID":"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300","Type":"ContainerStarted","Data":"01b43b80f0bd1c2e8e1b9fdca959751cbec342405793453756f645cf5c5c6360"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.935689 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" event={"ID":"35998b47-ed37-4a50-9553-18147918d9cb","Type":"ContainerStarted","Data":"7eed29d2d7d4583b9f952b68e6b57b89a754060c555d1c2c10eed5681fb2fe94"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.936864 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7v6vx" event={"ID":"9bef77c6-141b-4cff-a91d-7515860a6a2a","Type":"ContainerStarted","Data":"9aaae1a0beeed6aaacfd1b9d0998714ed50dadbd366711ecbf6866a2a127e075"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.937643 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" event={"ID":"fe0b1692-3dd7-4854-b53d-c32cd8162e1b","Type":"ContainerStarted","Data":"52a3d185ff100104821754eae9326aff61b04cdee822a2e7764f463d2e3f16d1"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.945446 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" event={"ID":"e9100695-b78d-4b2f-9cea-9d022064c792","Type":"ContainerStarted","Data":"618c988c78931a81603820eb3e891184d2a3644eb79244247e09c2d0c408abce"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.945800 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:12 crc kubenswrapper[5103]: E0130 00:12:12.946376 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.446348115 +0000 UTC m=+123.317846187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.949354 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-dtdff" event={"ID":"02410934-0df2-4e17-9042-91fa47becda6","Type":"ContainerStarted","Data":"4e9ce9a68542016b6f88a9291dacc4306623e88f04c8d4073cc32aca27ce9149"} Jan 30 00:12:12 crc kubenswrapper[5103]: I0130 00:12:12.952973 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" event={"ID":"e1617c52-82bc-4480-9bc4-e37e0264876e","Type":"ContainerStarted","Data":"973863cd6d6133ec3ff6a7fd2a13f58a8dd52f466be2fd39e8f85026734e7547"} Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.047828 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.048262 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.548243139 +0000 UTC m=+123.419741181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.071423 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59302: no serving certificate available for the kubelet" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.149596 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.149795 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.649762063 +0000 UTC m=+123.521260115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.150177 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.150588 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.650572303 +0000 UTC m=+123.522070355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.251528 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.251821 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.75177129 +0000 UTC m=+123.623269342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.252148 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.252665 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.752637601 +0000 UTC m=+123.624135843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.301570 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.301629 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.302169 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.311484 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.362190 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.362294 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:13.862268653 +0000 UTC m=+123.733766705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.366622 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.368376 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.868360571 +0000 UTC m=+123.739858713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.372319 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" podStartSLOduration=101.372306946 podStartE2EDuration="1m41.372306946s" podCreationTimestamp="2026-01-30 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:13.325177992 +0000 UTC m=+123.196676054" watchObservedRunningTime="2026-01-30 00:12:13.372306946 +0000 UTC m=+123.243804988" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.467937 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.468205 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.968176964 +0000 UTC m=+123.839675016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.468425 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.468955 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:13.968947923 +0000 UTC m=+123.840445975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.573399 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.574925 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.074904745 +0000 UTC m=+123.946402807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.676072 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.676579 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.176562532 +0000 UTC m=+124.048060584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.777194 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.777470 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.277455211 +0000 UTC m=+124.148953263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.882191 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.882495 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.382482491 +0000 UTC m=+124.253980543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.900116 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-4rfkh" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.925189 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" podStartSLOduration=100.925162457 podStartE2EDuration="1m40.925162457s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:13.43960331 +0000 UTC m=+123.311101372" watchObservedRunningTime="2026-01-30 00:12:13.925162457 +0000 UTC m=+123.796660509" Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.970304 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" event={"ID":"179763d8-8dea-40e5-ba89-1a848fbf519a","Type":"ContainerStarted","Data":"8dbe398184b3186e777d5d6b0a4e6d06823869a17c1e99e54f23987fc377abdf"} Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.974600 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" event={"ID":"a3a441e4-5ade-4309-938a-0f4fe130a721","Type":"ContainerStarted","Data":"6b4cfcefa38b9fd4bc838a28a7dc091b8b621768de6f0542beef2898a52448ae"} Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.981041 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" event={"ID":"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06","Type":"ContainerStarted","Data":"9279ef7847ed52983018c21d41a7442c838f79e9c0933fbc64d3162dda65f4ed"} Jan 30 
00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.982450 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" event={"ID":"4b196a79-ecff-4ec8-8338-33436cfd3dcc","Type":"ContainerStarted","Data":"62a838e8656494d098d13d74cc52b6fc0c79efb8bb8f5baac5cd69207bbc9cd2"} Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.983008 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.983185 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.483146545 +0000 UTC m=+124.354644647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.983852 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:13 crc kubenswrapper[5103]: E0130 00:12:13.984293 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.484283513 +0000 UTC m=+124.355781565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:13 crc kubenswrapper[5103]: I0130 00:12:13.988344 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" event={"ID":"42f46e1b-e6a2-499c-9e01-fe08785a78a4","Type":"ContainerStarted","Data":"9af11272942ca42d39709cfecca3dc78ece9ca80c014660fcc4006c573c808cb"} Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.000309 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" event={"ID":"c9dfcfad-0e85-4b3e-9a33-3729f7033251","Type":"ContainerStarted","Data":"0088d22741463fcba1136fa3da0b16a34b1e45195cc54544456bfe1bacf22409"} Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.014254 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.014301 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.018245 5103 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-2xrjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.018398 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" podUID="91703ab7-2f05-4831-8200-85210adf830b" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.018245 5103 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-cw4vd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.018851 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" podUID="42f46e1b-e6a2-499c-9e01-fe08785a78a4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.018303 5103 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-kg2rz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 
00:12:14.019076 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" podUID="a3a441e4-5ade-4309-938a-0f4fe130a721" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.057100 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-dtdff" podStartSLOduration=101.05707833 podStartE2EDuration="1m41.05707833s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.048532462 +0000 UTC m=+123.920030524" watchObservedRunningTime="2026-01-30 00:12:14.05707833 +0000 UTC m=+123.928576382" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.071330 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" podStartSLOduration=101.071306915 podStartE2EDuration="1m41.071306915s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.071005308 +0000 UTC m=+123.942503370" watchObservedRunningTime="2026-01-30 00:12:14.071306915 +0000 UTC m=+123.942804967" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.088082 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.088382 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.588352009 +0000 UTC m=+124.459850061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.102098 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" podStartSLOduration=101.102080912 podStartE2EDuration="1m41.102080912s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.099226063 +0000 UTC m=+123.970724135" watchObservedRunningTime="2026-01-30 00:12:14.102080912 +0000 UTC m=+123.973578964" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.171699 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" podStartSLOduration=101.171678232 podStartE2EDuration="1m41.171678232s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.136064217 +0000 UTC m=+124.007562269" watchObservedRunningTime="2026-01-30 00:12:14.171678232 +0000 UTC m=+124.043176284" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.174729 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-dg5nm" podStartSLOduration=101.174714716 podStartE2EDuration="1m41.174714716s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.172188444 +0000 UTC m=+124.043686506" watchObservedRunningTime="2026-01-30 00:12:14.174714716 +0000 UTC m=+124.046212768" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.179220 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6v8cn" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.193842 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.194487 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.694470215 +0000 UTC m=+124.565968267 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.195873 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-7v6vx" podStartSLOduration=101.195857339 podStartE2EDuration="1m41.195857339s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.195463329 +0000 UTC m=+124.066961391" watchObservedRunningTime="2026-01-30 00:12:14.195857339 +0000 UTC m=+124.067355391" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.241830 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-94r9t" podStartSLOduration=101.241795694 podStartE2EDuration="1m41.241795694s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.223271694 +0000 UTC m=+124.094769747" watchObservedRunningTime="2026-01-30 00:12:14.241795694 +0000 UTC m=+124.113293746" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.274400 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fqwng" podStartSLOduration=101.274364725 podStartE2EDuration="1m41.274364725s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.265771726 +0000 UTC m=+124.137269778" watchObservedRunningTime="2026-01-30 00:12:14.274364725 +0000 UTC m=+124.145862777" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.274734 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-rkb6j" podStartSLOduration=101.274729344 podStartE2EDuration="1m41.274729344s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:14.246704803 +0000 UTC m=+124.118202875" watchObservedRunningTime="2026-01-30 00:12:14.274729344 +0000 UTC m=+124.146227396" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.297783 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.298758 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:14.798741116 +0000 UTC m=+124.670239168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.400794 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.401270 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:14.901251055 +0000 UTC m=+124.772749107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.411992 5103 ???:1] "http: TLS handshake error from 192.168.126.11:51350: no serving certificate available for the kubelet" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.478253 5103 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-cw4vd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.478350 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" podUID="42f46e1b-e6a2-499c-9e01-fe08785a78a4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.503578 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.503748 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:15.003721803 +0000 UTC m=+124.875219855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.504412 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.505081 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.005036785 +0000 UTC m=+124.876535057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.606815 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.607128 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.107084332 +0000 UTC m=+124.978582384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.607807 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.608269 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.108260861 +0000 UTC m=+124.979758913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.709809 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.710043 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.210002621 +0000 UTC m=+125.081500803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.710639 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.711132 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.211110188 +0000 UTC m=+125.082608440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.718857 5103 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-kg2rz container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.718927 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" podUID="a3a441e4-5ade-4309-938a-0f4fe130a721" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.812253 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.812479 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.312443378 +0000 UTC m=+125.183941440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.812564 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.813388 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.31337716 +0000 UTC m=+125.184875232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.913915 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.914162 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.414125296 +0000 UTC m=+125.285623348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:14 crc kubenswrapper[5103]: I0130 00:12:14.914600 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:14 crc kubenswrapper[5103]: E0130 00:12:14.915025 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.415006628 +0000 UTC m=+125.286504680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.007757 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerStarted","Data":"9d8321d26701e84b2172ecd6b861bb6b29cc5de963380d89851b5cc503a53bec"} Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.009625 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gdlhx" event={"ID":"cf568a51-0f76-4d77-87d4-136b487786a9","Type":"ContainerStarted","Data":"fe1522e7fd6b0dc59e87655bc9973e6cc8f2b63e0ef1b899ac92c61aa6c3e586"} Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.011074 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" event={"ID":"0ebc9fa5-f75b-4468-b4b8-83695dd067b6","Type":"ContainerStarted","Data":"363aa2140ea2ee68e8f8de3bbe0adcb234b6544f48975b6c100141948f6105fe"} Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.012831 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-n8bvp" event={"ID":"2b7c825f-c092-4d5b-9a1d-be16df92e5a2","Type":"ContainerStarted","Data":"957df307ad25860ce5b36830bfbac9760d8f69ed0d321bf1012ad558ae18cce1"} Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.014680 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" event={"ID":"c3bbecd7-5e60-4290-bc24-b4f292d0d515","Type":"ContainerStarted","Data":"d171d9b792326634b76b70cb6545e3ba503abff9493d3c9455ccf9759920c60c"} Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.015501 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.015976 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.515947708 +0000 UTC m=+125.387445760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.016305 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.016452 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" event={"ID":"f3b3db2b-ab99-483b-a13c-4947269bc330","Type":"ContainerStarted","Data":"d649bb0db8fba7081a8b8f035d7fc4386fc27011dc5e9a81a9833393d61535cd"} Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.016996 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.516974523 +0000 UTC m=+125.388472575 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.018965 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" event={"ID":"ca146169-65b5-4eed-be41-43bb8bf87656","Type":"ContainerStarted","Data":"c3e8f0be8e6159ecb2ddd32ede292413a575d7d9482c4a2b5e7ec1b275b6f48b"} Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.117356 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.118321 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.618285683 +0000 UTC m=+125.489783735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.136591 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-6mbbh" podStartSLOduration=102.136567347 podStartE2EDuration="1m42.136567347s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:15.133444521 +0000 UTC m=+125.004942583" watchObservedRunningTime="2026-01-30 00:12:15.136567347 +0000 UTC m=+125.008065399" Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.191198 5103 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-kg2rz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.191262 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" podUID="a3a441e4-5ade-4309-938a-0f4fe130a721" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.191721 5103 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-2xrjj 
container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.191768 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" podUID="91703ab7-2f05-4831-8200-85210adf830b" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.191802 5103 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-cw4vd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.191825 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" podUID="42f46e1b-e6a2-499c-9e01-fe08785a78a4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.219905 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.221214 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.721196711 +0000 UTC m=+125.592694833 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.320968 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.321311 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.821263471 +0000 UTC m=+125.692761523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.322098 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.322374 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.822361147 +0000 UTC m=+125.693859199 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.423351 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.423674 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:15.923658486 +0000 UTC m=+125.795156538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.525782 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.526662 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.026648917 +0000 UTC m=+125.898146969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.627610 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.627803 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.127773132 +0000 UTC m=+125.999271184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.628414 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.628692 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.128683614 +0000 UTC m=+126.000181666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.730204 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.730639 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.230622559 +0000 UTC m=+126.102120611 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.832258 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.832801 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.332763378 +0000 UTC m=+126.204261430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:15 crc kubenswrapper[5103]: I0130 00:12:15.933284 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:15 crc kubenswrapper[5103]: E0130 00:12:15.933521 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.433482644 +0000 UTC m=+126.304980706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.026609 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" event={"ID":"a0ff7eb1-7b00-4318-936e-30862acd97e5","Type":"ContainerStarted","Data":"9dca98504341e27a03a8ff78c028fadcfc5dd5b0f249cbaf9c99b9d858eb8d3e"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.028235 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" event={"ID":"2e4d66cc-52c4-40ae-a23a-4aa4831adfb4","Type":"ContainerStarted","Data":"39ec93c3402e2db20ced5a9fca981b0a1a1ef4aa1f327beec0e10ddfc1ade594"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.029871 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" event={"ID":"10feec13-3e3a-46a2-8fdd-c1098eebd334","Type":"ContainerStarted","Data":"661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.031718 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" event={"ID":"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6","Type":"ContainerStarted","Data":"0a508e2b47be343d36f749765f082787a3574c8316fa9a174514a6459bd0c5ad"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.033971 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" event={"ID":"8550f022-16a5-4fac-a94e-fc322ee0cb9d","Type":"ContainerStarted","Data":"9b85284bd66712759742d88c8fb923f2f77677b1a100552dff1ca877e0834c77"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.034958 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.035454 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.535436769 +0000 UTC m=+126.406934821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.038541 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.040581 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7db8fb50f766d64858fb9c23c921f7327de27610f6bcaf84791914b161dde1c5"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.042585 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" event={"ID":"e1617c52-82bc-4480-9bc4-e37e0264876e","Type":"ContainerStarted","Data":"6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9"} Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.058183 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.059426 5103 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-mf247 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.059505 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.098559 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-4ltx6" podStartSLOduration=103.098539381 podStartE2EDuration="1m43.098539381s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.096893231 +0000 UTC m=+125.968391283" watchObservedRunningTime="2026-01-30 00:12:16.098539381 +0000 UTC m=+125.970037433" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.099348 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-w4q8t" podStartSLOduration=103.09934077 podStartE2EDuration="1m43.09934077s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:15.496481764 +0000 UTC m=+125.367979826" watchObservedRunningTime="2026-01-30 00:12:16.09934077 +0000 UTC m=+125.970838822" Jan 30 00:12:16 crc 
kubenswrapper[5103]: I0130 00:12:16.136296 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.136483 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.636448441 +0000 UTC m=+126.507946493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.150112 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podStartSLOduration=103.150097132 podStartE2EDuration="1m43.150097132s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.148528894 +0000 UTC m=+126.020026946" watchObservedRunningTime="2026-01-30 00:12:16.150097132 +0000 UTC m=+126.021595184" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.175520 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-5tp7b" podStartSLOduration=103.175491249 podStartE2EDuration="1m43.175491249s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.166225484 +0000 UTC m=+126.037723536" watchObservedRunningTime="2026-01-30 00:12:16.175491249 +0000 UTC m=+126.046989301" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.237771 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.238171 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.7381583 +0000 UTC m=+126.609656352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.327405 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.327467 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.338826 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.339077 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.839028139 +0000 UTC m=+126.710526191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.339626 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.340027 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.840011643 +0000 UTC m=+126.711509695 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.416259 5103 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-cw4vd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.416459 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" podUID="42f46e1b-e6a2-499c-9e01-fe08785a78a4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.416956 5103 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-kg2rz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.417128 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" podUID="a3a441e4-5ade-4309-938a-0f4fe130a721" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.438542 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podStartSLOduration=103.438525065 podStartE2EDuration="1m43.438525065s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.436161917 +0000 UTC m=+126.307659959" watchObservedRunningTime="2026-01-30 00:12:16.438525065 +0000 UTC m=+126.310023117" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.440863 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.441380 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:16.941363914 +0000 UTC m=+126.812861956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.491104 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" podStartSLOduration=103.491085991 podStartE2EDuration="1m43.491085991s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.470290786 +0000 UTC m=+126.341788828" watchObservedRunningTime="2026-01-30 00:12:16.491085991 +0000 UTC m=+126.362584043" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.491589 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.491909 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.493423 5103 patch_prober.go:28] interesting pod/apiserver-8596bd845d-6z46s container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.493510 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" podUID="a0ff7eb1-7b00-4318-936e-30862acd97e5" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.520946 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" podStartSLOduration=103.520926815 podStartE2EDuration="1m43.520926815s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.518899676 +0000 UTC m=+126.390397738" watchObservedRunningTime="2026-01-30 00:12:16.520926815 +0000 UTC m=+126.392424867" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.521517 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-n8bvp" podStartSLOduration=16.521509799 podStartE2EDuration="16.521509799s" podCreationTimestamp="2026-01-30 00:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.491680875 +0000 UTC m=+126.363178927" watchObservedRunningTime="2026-01-30 00:12:16.521509799 +0000 UTC m=+126.393007851" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.547769 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.552495 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.052474341 +0000 UTC m=+126.923972473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.603348 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=50.603325336 podStartE2EDuration="50.603325336s" podCreationTimestamp="2026-01-30 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.601745147 +0000 UTC m=+126.473243189" watchObservedRunningTime="2026-01-30 00:12:16.603325336 +0000 UTC m=+126.474823388" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.604133 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" podStartSLOduration=103.604127745 podStartE2EDuration="1m43.604127745s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:16.547165862 +0000 UTC m=+126.418663924" watchObservedRunningTime="2026-01-30 00:12:16.604127745 +0000 UTC m=+126.475625797" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.619784 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.627258 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.627330 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.647205 5103 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-mf247 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.647280 5103 
prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.649039 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.649220 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.149202899 +0000 UTC m=+127.020700951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.649574 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.649904 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.149894096 +0000 UTC m=+127.021392148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.751476 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.751794 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:17.25177611 +0000 UTC m=+127.123274162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.853615 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.854256 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.354222737 +0000 UTC m=+127.225720979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.955181 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.955414 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.455378553 +0000 UTC m=+127.326876605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:16 crc kubenswrapper[5103]: I0130 00:12:16.955874 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:16 crc kubenswrapper[5103]: E0130 00:12:16.956261 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.456253894 +0000 UTC m=+127.327751946 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.000088 5103 ???:1] "http: TLS handshake error from 192.168.126.11:51352: no serving certificate available for the kubelet" Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.037308 5103 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-clmhf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]log ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]etcd ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/max-in-flight-filter ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 30 00:12:17 crc kubenswrapper[5103]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 30 00:12:17 crc kubenswrapper[5103]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/project.openshift.io-projectcache ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 30 00:12:17 crc kubenswrapper[5103]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 30 00:12:17 crc kubenswrapper[5103]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 00:12:17 crc 
kubenswrapper[5103]: livez check failed Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.037400 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" podUID="e9100695-b78d-4b2f-9cea-9d022064c792" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.057159 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.057454 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.55741636 +0000 UTC m=+127.428914412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.058149 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.058546 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.558536687 +0000 UTC m=+127.430034739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.063415 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" event={"ID":"3ed247cb-77c1-47fb-ad58-f14f03aae2f2","Type":"ContainerStarted","Data":"52c1f9bdb16593064d2f5f5160ae38d23ba015d82d9b7a4cdc6d4b5e499bad67"} Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.064603 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2mh7r" event={"ID":"f2bcf9d4-8f8c-4722-95c8-03ff81e4b300","Type":"ContainerStarted","Data":"d1cf169304733ed9b279f4a210f8873f7a356dd7c4623f31e3fc4d1075634789"} Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.065936 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" event={"ID":"179763d8-8dea-40e5-ba89-1a848fbf519a","Type":"ContainerStarted","Data":"2534e9dd2dd4610ae33d4fb0f2d80f272f56ca1fbc159972eef0d6eb6f76663b"} Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.160021 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.160233 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.660202245 +0000 UTC m=+127.531700297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.160340 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.160880 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.660860731 +0000 UTC m=+127.532358783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.261569 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.261758 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.761733299 +0000 UTC m=+127.633231351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.262146 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.262523 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.762513748 +0000 UTC m=+127.634011800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.363097 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.363314 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.863281474 +0000 UTC m=+127.734779526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.363714 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.364112 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.864097464 +0000 UTC m=+127.735595616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.426925 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.465145 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.465509 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.965477835 +0000 UTC m=+127.836975927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.465751 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.466405 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:17.966390118 +0000 UTC m=+127.837888200 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.567172 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.567380 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.067349189 +0000 UTC m=+127.938847241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.567645 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.568038 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.068020805 +0000 UTC m=+127.939518867 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.620409 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.620476 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.668928 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.670737 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.170710258 +0000 UTC m=+128.042208390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.771090 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.771513 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.271493895 +0000 UTC m=+128.142991947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.871899 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.872155 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.372118648 +0000 UTC m=+128.243616710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.872856 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.873284 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.373274866 +0000 UTC m=+128.244772918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.974408 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.974625 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.474594865 +0000 UTC m=+128.346092918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:17 crc kubenswrapper[5103]: I0130 00:12:17.975011 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:17 crc kubenswrapper[5103]: E0130 00:12:17.975364 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.475357804 +0000 UTC m=+128.346855856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.075686 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.076017 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.576002407 +0000 UTC m=+128.447500459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.177498 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.177877 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.67786468 +0000 UTC m=+128.549362732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.278724 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.278902 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.778870682 +0000 UTC m=+128.650368734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.280247 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.280606 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.780598754 +0000 UTC m=+128.652096806 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.346228 5103 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-mf247 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.346623 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.352595 5103 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-r9ddz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.43:6443/healthz\": dial tcp 10.217.0.43:6443: connect: connection refused" start-of-body= Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.352669 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.43:6443/healthz\": dial tcp 10.217.0.43:6443: connect: connection refused" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.383114 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.383720 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.883702417 +0000 UTC m=+128.755200470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.441721 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" podStartSLOduration=17.441702326 podStartE2EDuration="17.441702326s" podCreationTimestamp="2026-01-30 00:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:18.414082895 +0000 UTC m=+128.285580957" watchObservedRunningTime="2026-01-30 00:12:18.441702326 +0000 UTC m=+128.313200378" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.442774 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2mh7r" podStartSLOduration=17.442767221 podStartE2EDuration="17.442767221s" podCreationTimestamp="2026-01-30 00:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:18.442197318 +0000 UTC m=+128.313695370" watchObservedRunningTime="2026-01-30 00:12:18.442767221 +0000 UTC m=+128.314265273" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.487224 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.490784 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:18.990766747 +0000 UTC m=+128.862264799 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.499483 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-2xrjj" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.499525 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.499541 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.499557 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.499599 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.500097 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.500381 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.507256 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.507453 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.566393 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" podStartSLOduration=105.566370322 podStartE2EDuration="1m45.566370322s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:18.560554491 +0000 UTC m=+128.432052553" watchObservedRunningTime="2026-01-30 00:12:18.566370322 +0000 UTC m=+128.437868374" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.588817 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.589175 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84904a2b-f796-4f03-be5b-c5e18c1806fe-kubelet-dir\") pod \"revision-pruner-6-crc\" 
(UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.589345 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84904a2b-f796-4f03-be5b-c5e18c1806fe-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.589497 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.089477613 +0000 UTC m=+128.960975665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.622032 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.622127 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.690350 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84904a2b-f796-4f03-be5b-c5e18c1806fe-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.690409 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.690457 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84904a2b-f796-4f03-be5b-c5e18c1806fe-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.690579 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84904a2b-f796-4f03-be5b-c5e18c1806fe-kubelet-dir\") pod 
\"revision-pruner-6-crc\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.690812 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.190799343 +0000 UTC m=+129.062297395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.726094 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84904a2b-f796-4f03-be5b-c5e18c1806fe-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.791699 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.792132 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.292112233 +0000 UTC m=+129.163610285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.828901 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.893237 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.893594 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:19.393578636 +0000 UTC m=+129.265076698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:18 crc kubenswrapper[5103]: I0130 00:12:18.996216 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:18 crc kubenswrapper[5103]: E0130 00:12:18.996993 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.496974246 +0000 UTC m=+129.368472298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.055433 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 30 00:12:19 crc kubenswrapper[5103]: W0130 00:12:19.063533 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod84904a2b_f796_4f03_be5b_c5e18c1806fe.slice/crio-d51f4b09576f48504cbc9e34a89612c3d129b4875a15ac23f1e2286ab9442de2 WatchSource:0}: Error finding container d51f4b09576f48504cbc9e34a89612c3d129b4875a15ac23f1e2286ab9442de2: Status 404 returned error can't find the container with id d51f4b09576f48504cbc9e34a89612c3d129b4875a15ac23f1e2286ab9442de2 Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.075634 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"84904a2b-f796-4f03-be5b-c5e18c1806fe","Type":"ContainerStarted","Data":"d51f4b09576f48504cbc9e34a89612c3d129b4875a15ac23f1e2286ab9442de2"} Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.077652 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gdlhx" event={"ID":"cf568a51-0f76-4d77-87d4-136b487786a9","Type":"ContainerStarted","Data":"ff81cdbda237a8e61eb1caba96c0f18f89b9fc8b809a3c054d9433b8fff3fda5"} Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.079665 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" event={"ID":"4b196a79-ecff-4ec8-8338-33436cfd3dcc","Type":"ContainerStarted","Data":"881e23f51e1bcf760414e8b5848ebe10a98e30b208af4036e32809d20558764d"} Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.098768 5103 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.099211 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.599191578 +0000 UTC m=+129.470689630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.171248 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cnbd2"] Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.199790 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.200018 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.699988655 +0000 UTC m=+129.571486707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.200142 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.200709 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.700689922 +0000 UTC m=+129.572187984 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.301278 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.301428 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.801397997 +0000 UTC m=+129.672896049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.301830 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.302187 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.802179706 +0000 UTC m=+129.673677758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.403454 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.403680 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.903648539 +0000 UTC m=+129.775146591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.404244 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.404605 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:19.904597012 +0000 UTC m=+129.776095064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.505680 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.505893 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.00585692 +0000 UTC m=+129.877354972 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.506432 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.506824 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.006810444 +0000 UTC m=+129.878308496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.532318 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.548994 5103 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-r9ddz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.43:6443/healthz\": dial tcp 10.217.0.43:6443: connect: connection refused" start-of-body= Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.549094 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.43:6443/healthz\": dial tcp 10.217.0.43:6443: connect: connection refused" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.574822 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-nc9m9" podStartSLOduration=106.574807244 podStartE2EDuration="1m46.574807244s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:19.573113163 +0000 UTC m=+129.444611225" watchObservedRunningTime="2026-01-30 00:12:19.574807244 +0000 UTC m=+129.446305296" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.608200 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.608752 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.108718868 +0000 UTC m=+129.980216920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.622364 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.622442 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.674391 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.674685 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.677275 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.677488 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.689560 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-6w9kf" podStartSLOduration=107.68954327 podStartE2EDuration="1m47.68954327s" podCreationTimestamp="2026-01-30 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:19.600395106 +0000 UTC m=+129.471893158" watchObservedRunningTime="2026-01-30 00:12:19.68954327 +0000 UTC m=+129.561041322" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.712588 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.715371 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.215343746 +0000 UTC m=+130.086841798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.817160 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.818068 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.818104 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.818260 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.318242774 +0000 UTC m=+130.189740826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.920248 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.920316 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.920480 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.920810 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:19 crc kubenswrapper[5103]: E0130 00:12:19.921196 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.421177433 +0000 UTC m=+130.292675485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.947957 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.973847 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7c7gb"] Jan 30 00:12:19 crc kubenswrapper[5103]: I0130 00:12:19.996901 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.021903 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.022436 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.522420001 +0000 UTC m=+130.393918053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.090339 5103 generic.go:358] "Generic (PLEG): container finished" podID="e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" containerID="9279ef7847ed52983018c21d41a7442c838f79e9c0933fbc64d3162dda65f4ed" exitCode=0 Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.125275 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.125688 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.625671488 +0000 UTC m=+130.497169540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.227185 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.227324 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:20.727297095 +0000 UTC m=+130.598795147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.227916 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.228247 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.728238148 +0000 UTC m=+130.599736200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.329237 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.329504 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.829473646 +0000 UTC m=+130.700971698 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.329975 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.330353 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.830340517 +0000 UTC m=+130.701838569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.431295 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.431477 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.931446241 +0000 UTC m=+130.802944293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.431763 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.432123 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:20.932113827 +0000 UTC m=+130.803611879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.533685 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.533803 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.033777996 +0000 UTC m=+130.905276048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.534165 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.534699 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.034681147 +0000 UTC m=+130.906179209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.566641 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7c7gb"] Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.566706 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" event={"ID":"69ff5998-10ea-4bf2-85ef-6f3621d2f1c6","Type":"ContainerStarted","Data":"40767d4f61f3f00b189ba8d8331595a1e056ec284136d6b7f5ac8f9ed3c8f3eb"} Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.566753 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nbjkv"] Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.566813 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.572578 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.635301 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.635479 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:21.135445454 +0000 UTC m=+131.006943506 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.635634 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.636157 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.136150251 +0000 UTC m=+131.007648303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.664183 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:20 crc kubenswrapper[5103]: [-]has-synced failed: reason withheld Jan 30 00:12:20 crc kubenswrapper[5103]: [+]process-running ok Jan 30 00:12:20 crc kubenswrapper[5103]: healthz check failed Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.664254 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.736924 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.737129 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.237100062 +0000 UTC m=+131.108598114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.737671 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgxx5\" (UniqueName: \"kubernetes.io/projected/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-kube-api-access-cgxx5\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.737862 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-catalog-content\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.737937 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.738069 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-utilities\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.738221 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.238214189 +0000 UTC m=+131.109712241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.839592 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.839765 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.339741822 +0000 UTC m=+131.211239874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.839992 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cgxx5\" (UniqueName: \"kubernetes.io/projected/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-kube-api-access-cgxx5\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.840128 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-catalog-content\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.840180 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.840477 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.34046766 +0000 UTC m=+131.211965712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.840525 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-utilities\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.840704 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-catalog-content\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.840937 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-utilities\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.863958 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgxx5\" (UniqueName: \"kubernetes.io/projected/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-kube-api-access-cgxx5\") pod \"certified-operators-7c7gb\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.883069 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.927479 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" gracePeriod=30 Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.944240 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.944371 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.444355102 +0000 UTC m=+131.315853154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:20 crc kubenswrapper[5103]: I0130 00:12:20.944693 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:20 crc kubenswrapper[5103]: E0130 00:12:20.944960 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.444953667 +0000 UTC m=+131.316451719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.029970 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gdlhx" podStartSLOduration=21.02994748 podStartE2EDuration="21.02994748s" podCreationTimestamp="2026-01-30 00:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:20.983374779 +0000 UTC m=+130.854872851" watchObservedRunningTime="2026-01-30 00:12:21.02994748 +0000 UTC m=+130.901445532" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.031077 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-rgqmz" podStartSLOduration=108.031069827 podStartE2EDuration="1m48.031069827s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:21.028623548 +0000 UTC m=+130.900121620" watchObservedRunningTime="2026-01-30 00:12:21.031069827 +0000 UTC m=+130.902567879" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.046212 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.046384 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:21.546326308 +0000 UTC m=+131.417824360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.046872 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.047493 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.547472096 +0000 UTC m=+131.418970148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.113093 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbjkv"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.116896 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.119890 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.139193 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.139509 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" event={"ID":"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06","Type":"ContainerDied","Data":"9279ef7847ed52983018c21d41a7442c838f79e9c0933fbc64d3162dda65f4ed"} Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.139536 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ea21664f-12f0-4c35-bcb0-2f3b355f9153","Type":"ContainerStarted","Data":"1f97cdac6963b6b6cb50799044e9ac18b855d2c1635c1f04b350e48382eb7d0f"} Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.139548 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vzx54"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.149384 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.149854 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.64982686 +0000 UTC m=+131.521324912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.252509 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-catalog-content\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.252777 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-utilities\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.252860 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.253103 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdntz\" (UniqueName: \"kubernetes.io/projected/9807e5f5-fa63-4e0c-9b52-3c0044337c40-kube-api-access-zdntz\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.253686 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.753669991 +0000 UTC m=+131.625168043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: W0130 00:12:21.323417 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebb7f7db_c773_49f6_b58b_6bd929f25f3a.slice/crio-b5cac0fe83167992a8ae22830c4af1a52661a8e624e0749533087d96d73359ba WatchSource:0}: Error finding container b5cac0fe83167992a8ae22830c4af1a52661a8e624e0749533087d96d73359ba: Status 404 returned error can't find the container with id b5cac0fe83167992a8ae22830c4af1a52661a8e624e0749533087d96d73359ba Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.354127 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.354332 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.854305625 +0000 UTC m=+131.725803677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.354704 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zdntz\" (UniqueName: \"kubernetes.io/projected/9807e5f5-fa63-4e0c-9b52-3c0044337c40-kube-api-access-zdntz\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.354900 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-catalog-content\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.354971 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-utilities\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.355010 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.355379 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.855370021 +0000 UTC m=+131.726868073 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.356169 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-catalog-content\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.356308 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-utilities\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.387228 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdntz\" (UniqueName: \"kubernetes.io/projected/9807e5f5-fa63-4e0c-9b52-3c0044337c40-kube-api-access-zdntz\") pod \"community-operators-nbjkv\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.449970 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.455958 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.456130 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.956101226 +0000 UTC m=+131.827599278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.456336 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.456681 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:21.95666659 +0000 UTC m=+131.828164642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.468377 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.468425 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"84904a2b-f796-4f03-be5b-c5e18c1806fe","Type":"ContainerStarted","Data":"24349bed06372dbea664953971f2bfbebc29c4bd99a219453ea1bb72d5709b02"} Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.468450 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzx54"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.468614 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qj2cx"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.468676 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.494605 5103 patch_prober.go:28] interesting pod/apiserver-8596bd845d-6z46s container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.494668 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" podUID="a0ff7eb1-7b00-4318-936e-30862acd97e5" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.557842 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.558117 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-utilities\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.558436 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-catalog-content\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.558568 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.058447571 +0000 UTC m=+131.929945633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.558775 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftd25\" (UniqueName: \"kubernetes.io/projected/faf9931f-40f0-4d66-b375-89bec91fd6b8-kube-api-access-ftd25\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.558983 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.559500 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.059489736 +0000 UTC m=+131.930987888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.627587 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:21 crc kubenswrapper[5103]: [-]has-synced failed: reason withheld Jan 30 00:12:21 crc kubenswrapper[5103]: [+]process-running ok Jan 30 00:12:21 crc kubenswrapper[5103]: healthz check failed Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.628177 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.660747 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.661263 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-utilities\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.661357 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-catalog-content\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.661398 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ftd25\" (UniqueName: \"kubernetes.io/projected/faf9931f-40f0-4d66-b375-89bec91fd6b8-kube-api-access-ftd25\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.661586 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.161553754 +0000 UTC m=+132.033051806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.661799 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-utilities\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.662288 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-catalog-content\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.707609 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftd25\" (UniqueName: \"kubernetes.io/projected/faf9931f-40f0-4d66-b375-89bec91fd6b8-kube-api-access-ftd25\") pod \"certified-operators-vzx54\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.763002 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.763372 5103 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.263360245 +0000 UTC m=+132.134858297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.815375 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.854037 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.854124 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qj2cx"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.854182 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7c7gb"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.854211 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbjkv"] Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.854359 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.865786 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.866097 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.366079369 +0000 UTC m=+132.237577421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.967568 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-utilities\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.967609 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rhcn\" (UniqueName: \"kubernetes.io/projected/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-kube-api-access-2rhcn\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.967644 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-catalog-content\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.967687 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:21 crc kubenswrapper[5103]: E0130 00:12:21.967994 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.467981553 +0000 UTC m=+132.339479605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:21 crc kubenswrapper[5103]: I0130 00:12:21.968798 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z59s8"] Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.068666 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.068908 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-utilities\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.068936 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2rhcn\" (UniqueName: \"kubernetes.io/projected/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-kube-api-access-2rhcn\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.068967 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-catalog-content\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.069426 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.569396605 +0000 UTC m=+132.440894657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.069621 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-catalog-content\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.069864 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-utilities\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.095870 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rhcn\" (UniqueName: \"kubernetes.io/projected/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-kube-api-access-2rhcn\") pod \"community-operators-qj2cx\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: W0130 00:12:22.116684 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaf9931f_40f0_4d66_b375_89bec91fd6b8.slice/crio-efd1833ce5dba5fe0d8d29ba5d602d25ca88ad5bac471d3550f1eabc547727e3 WatchSource:0}: Error finding container efd1833ce5dba5fe0d8d29ba5d602d25ca88ad5bac471d3550f1eabc547727e3: Status 404 returned error can't find the container with id efd1833ce5dba5fe0d8d29ba5d602d25ca88ad5bac471d3550f1eabc547727e3 Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.140347 5103 generic.go:358] "Generic (PLEG): container finished" podID="84904a2b-f796-4f03-be5b-c5e18c1806fe" containerID="24349bed06372dbea664953971f2bfbebc29c4bd99a219453ea1bb72d5709b02" exitCode=0 Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.156594 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-clmhf" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.156628 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59s8"] Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.156643 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"84904a2b-f796-4f03-be5b-c5e18c1806fe","Type":"ContainerDied","Data":"24349bed06372dbea664953971f2bfbebc29c4bd99a219453ea1bb72d5709b02"} Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.156666 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzx54"] Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.156849 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.161677 5103 ???:1] "http: TLS handshake error from 192.168.126.11:51358: no serving certificate available for the kubelet" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.163724 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.170696 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.172180 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.67216284 +0000 UTC m=+132.543660892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.176877 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c7gb" event={"ID":"ebb7f7db-c773-49f6-b58b-6bd929f25f3a","Type":"ContainerStarted","Data":"b5cac0fe83167992a8ae22830c4af1a52661a8e624e0749533087d96d73359ba"} Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.192196 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbjkv" event={"ID":"9807e5f5-fa63-4e0c-9b52-3c0044337c40","Type":"ContainerStarted","Data":"61764b58f50ceebb2c7b19c23cfca937d7976fd5804c25d5eefbebe83ee09940"} Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.197627 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ea21664f-12f0-4c35-bcb0-2f3b355f9153","Type":"ContainerStarted","Data":"0749980a450b373b44edee6b048d31d5b6409df51186bebecc0a106cf78c36cb"} Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.273918 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.274573 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-catalog-content\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: 
I0130 00:12:22.274607 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-utilities\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.274739 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gwr8\" (UniqueName: \"kubernetes.io/projected/c312b248-250c-4b33-9c7a-f79c1e73a75b-kube-api-access-4gwr8\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.274972 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.774952495 +0000 UTC m=+132.646450547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.288608 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.375962 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-catalog-content\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.376377 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-utilities\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.376443 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.376482 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4gwr8\" (UniqueName: \"kubernetes.io/projected/c312b248-250c-4b33-9c7a-f79c1e73a75b-kube-api-access-4gwr8\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.377402 5103 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.877383002 +0000 UTC m=+132.748881054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.377667 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-utilities\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.378182 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-catalog-content\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.379088 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xpqb7"] Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.401016 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gwr8\" (UniqueName: \"kubernetes.io/projected/c312b248-250c-4b33-9c7a-f79c1e73a75b-kube-api-access-4gwr8\") pod \"redhat-marketplace-z59s8\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.478034 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.478297 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.478400 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.978377164 +0000 UTC m=+132.849875206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.478641 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.479014 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:22.979005839 +0000 UTC m=+132.850503891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.531534 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.565670 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=3.565647503 podStartE2EDuration="3.565647503s" podCreationTimestamp="2026-01-30 00:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:12:22.553461587 +0000 UTC m=+132.424959639" watchObservedRunningTime="2026-01-30 00:12:22.565647503 +0000 UTC m=+132.437145555" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.579722 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.580187 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.080168135 +0000 UTC m=+132.951666187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.625332 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:22 crc kubenswrapper[5103]: [-]has-synced failed: reason withheld Jan 30 00:12:22 crc kubenswrapper[5103]: [+]process-running ok Jan 30 00:12:22 crc kubenswrapper[5103]: healthz check failed Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.625722 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.681174 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smxdw\" (UniqueName: \"kubernetes.io/projected/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-kube-api-access-smxdw\") pod \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.681303 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-config-volume\") pod \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.681337 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-secret-volume\") pod \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\" (UID: \"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.681670 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.684423 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-config-volume" (OuterVolumeSpecName: "config-volume") pod "e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" (UID: "e33f92bd-42f6-4e7b-8176-6ab1c33e6c06"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.685670 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:23.185642096 +0000 UTC m=+133.057140368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.691974 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-kube-api-access-smxdw" (OuterVolumeSpecName: "kube-api-access-smxdw") pod "e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" (UID: "e33f92bd-42f6-4e7b-8176-6ab1c33e6c06"). InnerVolumeSpecName "kube-api-access-smxdw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.701254 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" (UID: "e33f92bd-42f6-4e7b-8176-6ab1c33e6c06"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.776827 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.776922 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.783730 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.784402 5103 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.784432 5103 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.784445 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-smxdw\" (UniqueName: \"kubernetes.io/projected/e33f92bd-42f6-4e7b-8176-6ab1c33e6c06-kube-api-access-smxdw\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.784547 5103 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.284520116 +0000 UTC m=+133.156018168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: W0130 00:12:22.834272 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc312b248_250c_4b33_9c7a_f79c1e73a75b.slice/crio-e00554ee3ee9141178c8c93a9b221de3559a21de326b50788319212bb34c00ff WatchSource:0}: Error finding container e00554ee3ee9141178c8c93a9b221de3559a21de326b50788319212bb34c00ff: Status 404 returned error can't find the container with id e00554ee3ee9141178c8c93a9b221de3559a21de326b50788319212bb34c00ff Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.886509 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.887183 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.387160063 +0000 UTC m=+133.258658125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:22 crc kubenswrapper[5103]: I0130 00:12:22.988591 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:22 crc kubenswrapper[5103]: E0130 00:12:22.988990 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.488970189 +0000 UTC m=+133.360468241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.090718 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.091376 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.59136364 +0000 UTC m=+133.462861692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.192980 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.193326 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.693282868 +0000 UTC m=+133.564780960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.193899 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.194631 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.694611211 +0000 UTC m=+133.566109293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.213669 5103 generic.go:358] "Generic (PLEG): container finished" podID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerID="2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153" exitCode=0 Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.295756 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.295896 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.795876404 +0000 UTC m=+133.667374456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.296086 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.296389 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.796381406 +0000 UTC m=+133.667879458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.301621 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.301728 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.397249 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.397487 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.897453274 +0000 UTC m=+133.768951326 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.398095 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.398532 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.8985149 +0000 UTC m=+133.770012962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.499333 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.499535 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:23.999496756 +0000 UTC m=+133.870994808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.499978 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.500366 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.000351627 +0000 UTC m=+133.871849679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.586108 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpqb7"] Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.586411 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.586884 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.588083 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602315 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" event={"ID":"fe0b1692-3dd7-4854-b53d-c32cd8162e1b","Type":"ContainerStarted","Data":"c389a8c50911f0500e80c3994452e04998e51f35893361879d7ec4d4c0c6337f"} Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602358 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qj2cx"] Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602378 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c7gb" event={"ID":"ebb7f7db-c773-49f6-b58b-6bd929f25f3a","Type":"ContainerDied","Data":"2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153"} Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602397 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59s8"] Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602411 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj2cx" event={"ID":"3ce63351-9fca-4e0e-b4fb-3032a983ebcc","Type":"ContainerStarted","Data":"d31bb6a2f9fb799d1f7776dc6dbb0a5dcdd009e2858db6301a056354672735ba"} Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602422 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495520-hdxqw" event={"ID":"e33f92bd-42f6-4e7b-8176-6ab1c33e6c06","Type":"ContainerDied","Data":"22cf5ca5b9dc2b7338a29b6c0ecec87eac0aa4aac8490606aa762bcf17a7311c"} Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602437 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22cf5ca5b9dc2b7338a29b6c0ecec87eac0aa4aac8490606aa762bcf17a7311c" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602475 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzx54" event={"ID":"faf9931f-40f0-4d66-b375-89bec91fd6b8","Type":"ContainerStarted","Data":"efd1833ce5dba5fe0d8d29ba5d602d25ca88ad5bac471d3550f1eabc547727e3"} Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602487 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59s8" event={"ID":"c312b248-250c-4b33-9c7a-f79c1e73a75b","Type":"ContainerStarted","Data":"e00554ee3ee9141178c8c93a9b221de3559a21de326b50788319212bb34c00ff"} Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.602501 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2rjzw"] Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.603483 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" containerName="collect-profiles" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.603504 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" containerName="collect-profiles" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.603621 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="e33f92bd-42f6-4e7b-8176-6ab1c33e6c06" containerName="collect-profiles" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.603814 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.603946 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.103922826 +0000 UTC m=+133.975420898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.604301 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.604632 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.104622103 +0000 UTC m=+133.976120165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.623534 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:23 crc kubenswrapper[5103]: [-]has-synced failed: reason withheld Jan 30 00:12:23 crc kubenswrapper[5103]: [+]process-running ok Jan 30 00:12:23 crc kubenswrapper[5103]: healthz check failed Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.623615 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.706251 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.706473 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f77zp\" (UniqueName: \"kubernetes.io/projected/3d4d4fce-00ed-4163-8a52-864aa4d324e6-kube-api-access-f77zp\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.706531 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-catalog-content\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.706564 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-utilities\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.706689 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.206674125 +0000 UTC m=+134.078172177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.807727 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-catalog-content\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.807774 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-utilities\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.807921 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.807974 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f77zp\" (UniqueName: \"kubernetes.io/projected/3d4d4fce-00ed-4163-8a52-864aa4d324e6-kube-api-access-f77zp\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.808290 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-catalog-content\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.808349 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-utilities\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.808351 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.308330998 +0000 UTC m=+134.179829060 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.815707 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.861894 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f77zp\" (UniqueName: \"kubernetes.io/projected/3d4d4fce-00ed-4163-8a52-864aa4d324e6-kube-api-access-f77zp\") pod \"redhat-marketplace-xpqb7\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.910664 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.910743 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.410717228 +0000 UTC m=+134.282215290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.910984 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84904a2b-f796-4f03-be5b-c5e18c1806fe-kubelet-dir\") pod \"84904a2b-f796-4f03-be5b-c5e18c1806fe\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.911142 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84904a2b-f796-4f03-be5b-c5e18c1806fe-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "84904a2b-f796-4f03-be5b-c5e18c1806fe" (UID: "84904a2b-f796-4f03-be5b-c5e18c1806fe"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.911150 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84904a2b-f796-4f03-be5b-c5e18c1806fe-kube-api-access\") pod \"84904a2b-f796-4f03-be5b-c5e18c1806fe\" (UID: \"84904a2b-f796-4f03-be5b-c5e18c1806fe\") " Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.911343 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.911862 5103 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84904a2b-f796-4f03-be5b-c5e18c1806fe-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:23 crc kubenswrapper[5103]: E0130 00:12:23.912183 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.412155423 +0000 UTC m=+134.283653475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.917940 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84904a2b-f796-4f03-be5b-c5e18c1806fe-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "84904a2b-f796-4f03-be5b-c5e18c1806fe" (UID: "84904a2b-f796-4f03-be5b-c5e18c1806fe"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:23 crc kubenswrapper[5103]: I0130 00:12:23.935714 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.012932 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.013530 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84904a2b-f796-4f03-be5b-c5e18c1806fe-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.014905 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:24.514884131 +0000 UTC m=+134.386382183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.072715 5103 patch_prober.go:28] interesting pod/console-64d44f6ddf-7v6vx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.072779 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-7v6vx" podUID="9bef77c6-141b-4cff-a91d-7515860a6a2a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.115008 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.115607 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.61556858 +0000 UTC m=+134.487066632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: W0130 00:12:24.148983 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d4d4fce_00ed_4163_8a52_864aa4d324e6.slice/crio-7644a5d832213c30e73c7160330a4d5f9a395115e8e0a49061670b16a87be474 WatchSource:0}: Error finding container 7644a5d832213c30e73c7160330a4d5f9a395115e8e0a49061670b16a87be474: Status 404 returned error can't find the container with id 7644a5d832213c30e73c7160330a4d5f9a395115e8e0a49061670b16a87be474 Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.217037 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.217392 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.717368196 +0000 UTC m=+134.588866248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.217656 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.218214 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.718182866 +0000 UTC m=+134.589680908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.225374 5103 generic.go:358] "Generic (PLEG): container finished" podID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerID="b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f" exitCode=0 Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.227254 5103 generic.go:358] "Generic (PLEG): container finished" podID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerID="9b8ee9cc3437496d869aca397a52ca77f07188d54f568012703f601a70efc9d2" exitCode=0 Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.229256 5103 generic.go:358] "Generic (PLEG): container finished" podID="ea21664f-12f0-4c35-bcb0-2f3b355f9153" containerID="0749980a450b373b44edee6b048d31d5b6409df51186bebecc0a106cf78c36cb" exitCode=0 Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.318790 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.319174 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.819125861 +0000 UTC m=+134.690623933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.319584 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.320088 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.820066734 +0000 UTC m=+134.691564776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.420589 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.420792 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.920753442 +0000 UTC m=+134.792251504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.421568 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.421922 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:24.92190787 +0000 UTC m=+134.793405922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.523207 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.523458 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.023394068 +0000 UTC m=+134.894892120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.524131 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.524753 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.02472152 +0000 UTC m=+134.896219612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.624928 5103 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qgd5c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 00:12:24 crc kubenswrapper[5103]: [+]has-synced ok Jan 30 00:12:24 crc kubenswrapper[5103]: [+]process-running ok Jan 30 00:12:24 crc kubenswrapper[5103]: healthz check failed Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.625028 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" podUID="2e4d66cc-52c4-40ae-a23a-4aa4831adfb4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.625550 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.625732 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.125700827 +0000 UTC m=+134.997198919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.626153 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.626599 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.126575198 +0000 UTC m=+134.998073260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.701622 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2rjzw"] Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.701695 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bhpd7"] Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.702318 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.702509 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.702993 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="84904a2b-f796-4f03-be5b-c5e18c1806fe" containerName="pruner" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.703017 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="84904a2b-f796-4f03-be5b-c5e18c1806fe" containerName="pruner" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.703176 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="84904a2b-f796-4f03-be5b-c5e18c1806fe" containerName="pruner" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.710618 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.754206 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.754438 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-catalog-content\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.754490 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-utilities\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.754609 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtd7x\" (UniqueName: \"kubernetes.io/projected/6c3bfb26-42f9-43f4-8126-b941aea6ecca-kube-api-access-qtd7x\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " 
pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.757319 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.257292067 +0000 UTC m=+135.128790129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.855467 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-catalog-content\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.855513 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-utilities\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.855588 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.855608 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qtd7x\" (UniqueName: \"kubernetes.io/projected/6c3bfb26-42f9-43f4-8126-b941aea6ecca-kube-api-access-qtd7x\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.856289 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.356275745 +0000 UTC m=+135.227773797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.856667 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-utilities\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.856717 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-catalog-content\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.899309 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtd7x\" (UniqueName: \"kubernetes.io/projected/6c3bfb26-42f9-43f4-8126-b941aea6ecca-kube-api-access-qtd7x\") pod \"redhat-operators-2rjzw\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:24 crc kubenswrapper[5103]: I0130 00:12:24.956666 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:24 crc kubenswrapper[5103]: E0130 00:12:24.957389 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.457357963 +0000 UTC m=+135.328856045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.058737 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.059125 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:25.559107578 +0000 UTC m=+135.430605640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.062108 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.160208 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.160373 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.66033712 +0000 UTC m=+135.531835172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.160711 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.161232 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.661208481 +0000 UTC m=+135.532706533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.261685 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.262043 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.762022873 +0000 UTC m=+135.633520925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: W0130 00:12:25.339144 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c3bfb26_42f9_43f4_8126_b941aea6ecca.slice/crio-06c99f794d6099db2b3382cfe3ae52362055fdf833d1abdcf54ef653697a4f26 WatchSource:0}: Error finding container 06c99f794d6099db2b3382cfe3ae52362055fdf833d1abdcf54ef653697a4f26: Status 404 returned error can't find the container with id 06c99f794d6099db2b3382cfe3ae52362055fdf833d1abdcf54ef653697a4f26 Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.363766 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.364284 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.86426563 +0000 UTC m=+135.735763682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.465384 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.465547 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.965520203 +0000 UTC m=+135.837018255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.466035 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.466374 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:25.966358633 +0000 UTC m=+135.837856685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.566834 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.567077 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.067032122 +0000 UTC m=+135.938530174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.567200 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.567870 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.067829101 +0000 UTC m=+135.939327153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.668253 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.668379 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.168355036 +0000 UTC m=+136.039853088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.668853 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.669205 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.169196816 +0000 UTC m=+136.040694868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.725372 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bhpd7"] Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.725436 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbjkv" event={"ID":"9807e5f5-fa63-4e0c-9b52-3c0044337c40","Type":"ContainerDied","Data":"b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f"} Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.725634 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.726323 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736322 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzx54" event={"ID":"faf9931f-40f0-4d66-b375-89bec91fd6b8","Type":"ContainerDied","Data":"9b8ee9cc3437496d869aca397a52ca77f07188d54f568012703f601a70efc9d2"} Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736410 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736435 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736501 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736512 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ea21664f-12f0-4c35-bcb0-2f3b355f9153","Type":"ContainerDied","Data":"0749980a450b373b44edee6b048d31d5b6409df51186bebecc0a106cf78c36cb"} Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736530 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpqb7"] Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736543 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerStarted","Data":"7644a5d832213c30e73c7160330a4d5f9a395115e8e0a49061670b16a87be474"} Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736555 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"84904a2b-f796-4f03-be5b-c5e18c1806fe","Type":"ContainerDied","Data":"d51f4b09576f48504cbc9e34a89612c3d129b4875a15ac23f1e2286ab9442de2"} Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736566 5103 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d51f4b09576f48504cbc9e34a89612c3d129b4875a15ac23f1e2286ab9442de2" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.736583 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2rjzw"] Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.769642 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.769860 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.269814034 +0000 UTC m=+136.141312086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.770316 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-catalog-content\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.770384 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nnsl\" (UniqueName: \"kubernetes.io/projected/096edab0-9031-4bcd-8451-a93417372ee1-kube-api-access-6nnsl\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.770413 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-utilities\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.770439 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.771018 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.271011553 +0000 UTC m=+136.142509605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.826762 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gdlhx" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.828942 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-qgd5c" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.871731 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.871992 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.371959338 +0000 UTC m=+136.243457400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.872868 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-catalog-content\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.873029 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6nnsl\" (UniqueName: \"kubernetes.io/projected/096edab0-9031-4bcd-8451-a93417372ee1-kube-api-access-6nnsl\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.873144 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-utilities\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.873237 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.875242 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-utilities\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.876813 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-catalog-content\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.878351 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.378318833 +0000 UTC m=+136.249817055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.910220 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nnsl\" (UniqueName: \"kubernetes.io/projected/096edab0-9031-4bcd-8451-a93417372ee1-kube-api-access-6nnsl\") pod \"redhat-operators-bhpd7\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.975280 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.975540 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.475488826 +0000 UTC m=+136.346986878 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:25 crc kubenswrapper[5103]: I0130 00:12:25.976333 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:25 crc kubenswrapper[5103]: E0130 00:12:25.976719 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.476702036 +0000 UTC m=+136.348200308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.048536 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.078113 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.078606 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.578590034 +0000 UTC m=+136.450088086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.182805 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.183344 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.683330121 +0000 UTC m=+136.554828173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.274524 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerStarted","Data":"06c99f794d6099db2b3382cfe3ae52362055fdf833d1abdcf54ef653697a4f26"} Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.279530 5103 generic.go:358] "Generic (PLEG): container finished" podID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerID="8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35" exitCode=0 Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.279664 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59s8" event={"ID":"c312b248-250c-4b33-9c7a-f79c1e73a75b","Type":"ContainerDied","Data":"8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35"} Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.299663 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.300558 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.800530572 +0000 UTC m=+136.672028624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.300686 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.302150 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.802141821 +0000 UTC m=+136.673639873 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.304415 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerStarted","Data":"487aacb9ba75fd28f520f3d4a32a82a1b33516035610efccfd2d8baacd805ff1"} Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.311839 5103 generic.go:358] "Generic (PLEG): container finished" podID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerID="650d7faa5f4f892e52058d54951c121f7eb03b49005bdf02d2d0dcbf11476748" exitCode=0 Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.312104 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj2cx" event={"ID":"3ce63351-9fca-4e0e-b4fb-3032a983ebcc","Type":"ContainerDied","Data":"650d7faa5f4f892e52058d54951c121f7eb03b49005bdf02d2d0dcbf11476748"} Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.405311 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.405900 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.905875864 +0000 UTC m=+136.777373916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.406033 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.407552 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:26.907540954 +0000 UTC m=+136.779039006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.413054 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.420094 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cw4vd" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.420554 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kg2rz" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.420845 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.501848 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.507595 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.510143 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.010106569 +0000 UTC m=+136.881604621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.513503 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-6z46s" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.564493 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bhpd7"] Jan 30 00:12:26 crc kubenswrapper[5103]: W0130 00:12:26.593803 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod096edab0_9031_4bcd_8451_a93417372ee1.slice/crio-d4066f004b894aa275e8f17ab459177453c90a0d285ce8521d3e860edb7bf0cf WatchSource:0}: Error finding container d4066f004b894aa275e8f17ab459177453c90a0d285ce8521d3e860edb7bf0cf: Status 404 returned error can't find the container with id d4066f004b894aa275e8f17ab459177453c90a0d285ce8521d3e860edb7bf0cf Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.614174 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.614613 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.11460083 +0000 UTC m=+136.986098882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.621068 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.715118 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kubelet-dir\") pod \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.715223 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kube-api-access\") pod \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\" (UID: \"ea21664f-12f0-4c35-bcb0-2f3b355f9153\") " Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.715285 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ea21664f-12f0-4c35-bcb0-2f3b355f9153" (UID: "ea21664f-12f0-4c35-bcb0-2f3b355f9153"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.715383 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.715648 5103 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.716737 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.216696064 +0000 UTC m=+137.088194116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.722532 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ea21664f-12f0-4c35-bcb0-2f3b355f9153" (UID: "ea21664f-12f0-4c35-bcb0-2f3b355f9153"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.817177 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.817800 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.317782732 +0000 UTC m=+137.189280784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.817862 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea21664f-12f0-4c35-bcb0-2f3b355f9153-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.918696 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.918872 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.41884621 +0000 UTC m=+137.290344252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:26 crc kubenswrapper[5103]: I0130 00:12:26.919281 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:26 crc kubenswrapper[5103]: E0130 00:12:26.919629 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:27.419617049 +0000 UTC m=+137.291115101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.021014 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.021187 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.521158239 +0000 UTC m=+137.392656281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.021858 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.022338 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.522315367 +0000 UTC m=+137.393813469 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.122658 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.122820 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.622788791 +0000 UTC m=+137.494286853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.123238 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.123600 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.62358361 +0000 UTC m=+137.495081812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.224298 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.224513 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.724481434 +0000 UTC m=+137.595979486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.224784 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.225077 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.725064218 +0000 UTC m=+137.596562270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.323624 5103 generic.go:358] "Generic (PLEG): container finished" podID="096edab0-9031-4bcd-8451-a93417372ee1" containerID="e6a329d39509762784caccc32b4323411f00c0a9bfd035635c251413ddb2d332" exitCode=0 Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.323751 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerDied","Data":"e6a329d39509762784caccc32b4323411f00c0a9bfd035635c251413ddb2d332"} Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.323787 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerStarted","Data":"d4066f004b894aa275e8f17ab459177453c90a0d285ce8521d3e860edb7bf0cf"} Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.326674 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ea21664f-12f0-4c35-bcb0-2f3b355f9153","Type":"ContainerDied","Data":"1f97cdac6963b6b6cb50799044e9ac18b855d2c1635c1f04b350e48382eb7d0f"} Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.326694 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.326696 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f97cdac6963b6b6cb50799044e9ac18b855d2c1635c1f04b350e48382eb7d0f" Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.328014 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.328232 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.828208827 +0000 UTC m=+137.699706889 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.328366 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.329679 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.829666952 +0000 UTC m=+137.701165014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.332642 5103 generic.go:358] "Generic (PLEG): container finished" podID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerID="fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f" exitCode=0 Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.332741 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerDied","Data":"fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f"} Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.335362 5103 generic.go:358] "Generic (PLEG): container finished" podID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerID="487aacb9ba75fd28f520f3d4a32a82a1b33516035610efccfd2d8baacd805ff1" exitCode=0 Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.335856 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerDied","Data":"487aacb9ba75fd28f520f3d4a32a82a1b33516035610efccfd2d8baacd805ff1"} Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.431105 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.431687 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-30 00:12:27.931595861 +0000 UTC m=+137.803093933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.432214 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.434651 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:27.934627115 +0000 UTC m=+137.806125167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.534821 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.535905 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.035869887 +0000 UTC m=+137.907367979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.637391 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.637846 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.137829487 +0000 UTC m=+138.009327529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.740706 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.740987 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.240910794 +0000 UTC m=+138.112408846 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.741626 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.742064 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.242041612 +0000 UTC m=+138.113539664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.843647 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.843894 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.343849188 +0000 UTC m=+138.215347240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.844387 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.844852 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.344844062 +0000 UTC m=+138.216342114 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.946240 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.946485 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.446427473 +0000 UTC m=+138.317925535 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:27 crc kubenswrapper[5103]: I0130 00:12:27.946763 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:27 crc kubenswrapper[5103]: E0130 00:12:27.947272 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.447249193 +0000 UTC m=+138.318747245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.048034 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.048286 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.548257929 +0000 UTC m=+138.419755981 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.048693 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.049087 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.549042178 +0000 UTC m=+138.420540230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.150317 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.150575 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.650535386 +0000 UTC m=+138.522033448 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.151111 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.151590 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.651572482 +0000 UTC m=+138.523070534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.252605 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.252911 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.752855285 +0000 UTC m=+138.624353347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.253258 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.253703 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.753680185 +0000 UTC m=+138.625178237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.349017 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.351436 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.353385 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.355528 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.355717 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.855686666 +0000 UTC m=+138.727184708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.357787 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.360812 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.360897 5103 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.370160 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.870135807 +0000 UTC m=+138.741633859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.462403 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.462664 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.962633787 +0000 UTC m=+138.834131839 (durationBeforeRetry 500ms). 
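The ExecSync failures interleaved here ("cannot register an exec PID: container is stopping") are the runtime refusing to start a new exec for a readiness probe in a container that is already shutting down (the openshift-multus/cni-sysctl-allowlist-ds-cnbd2 pod); the probed command is the cmd=["/bin/bash","-c","test -f /ready/ready"] shown in the entry. A purely illustrative sketch of the probe shape this implies, written against the upstream core/v1 types and not taken from the actual openshift-multus manifest:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative only: the kubelet asks the runtime (CRI-O here) to exec this
	// command inside the container; while the container is stopping, the runtime
	// rejects new execs, which is what "cannot register an exec PID: container is
	// stopping" reports back to the prober.
	readiness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"/bin/bash", "-c", "test -f /ready/ready"},
			},
		},
	}
	fmt.Printf("%+v\n", readiness)
}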
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.462880 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.463351 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:28.963333124 +0000 UTC m=+138.834831286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.564495 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.564708 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.064665309 +0000 UTC m=+138.936163371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.565123 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.565534 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.065516929 +0000 UTC m=+138.937014991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.669489 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.670079 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.170039652 +0000 UTC m=+139.041537704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.771272 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.771610 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.271598222 +0000 UTC m=+139.143096274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.871958 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.872133 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.372104886 +0000 UTC m=+139.243602948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.872658 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.873029 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.373013428 +0000 UTC m=+139.244511480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.974301 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.974657 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.474596799 +0000 UTC m=+139.346094851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:28 crc kubenswrapper[5103]: I0130 00:12:28.975085 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:28 crc kubenswrapper[5103]: E0130 00:12:28.975482 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.47546247 +0000 UTC m=+139.346960722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.076114 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.076279 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.576244751 +0000 UTC m=+139.447742803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.076708 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.077101 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.577042161 +0000 UTC m=+139.448540213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.178216 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.178573 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.678511599 +0000 UTC m=+139.550009651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.178873 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.179297 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.679277637 +0000 UTC m=+139.550775829 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.280223 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.280436 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.780392867 +0000 UTC m=+139.651891049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.281371 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.281822 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.781808881 +0000 UTC m=+139.653306933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.385842 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.386074 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.886017896 +0000 UTC m=+139.757515948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.386952 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.387437 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.887412319 +0000 UTC m=+139.758910551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.489060 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.489362 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.989314258 +0000 UTC m=+139.860812310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.489578 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.490040 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:29.990018555 +0000 UTC m=+139.861516597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.555647 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.595845 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.599179 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.099127279 +0000 UTC m=+139.970625331 (durationBeforeRetry 500ms). 
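Besides the API-side view, the registration state can be checked on the node itself: the kubelet discovers CSI node plugins through registration sockets under its plugin-registration directory, /var/lib/kubelet/plugins_registry by default (an assumption here; a non-default kubelet --root-dir moves it). A minimal node-local sketch, to be run on the crc node:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Registration sockets are typically named after the driver (e.g. a
	// "kubevirt.io.hostpath-provisioner-*.sock" entry once its node plugin is up).
	entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read registration dir:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}

If no socket for kubevirt.io.hostpath-provisioner is present, the repeated "not found in the list of registered CSI drivers" errors are expected until the driver's node pod starts and registers with the kubelet.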
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.599568 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.600136 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.100111433 +0000 UTC m=+139.971609485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.701077 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.701394 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.201353255 +0000 UTC m=+140.072851317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.701535 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.702537 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.202523114 +0000 UTC m=+140.074021366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.803241 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.803731 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.303711005 +0000 UTC m=+140.175209057 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:29 crc kubenswrapper[5103]: I0130 00:12:29.905313 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:29 crc kubenswrapper[5103]: E0130 00:12:29.905984 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.405961592 +0000 UTC m=+140.277459644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.006445 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.006729 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.506673901 +0000 UTC m=+140.378171953 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.007030 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.007539 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.507505831 +0000 UTC m=+140.379003883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.108216 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.108653 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.608637161 +0000 UTC m=+140.480135203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.210010 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.210517 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.710445257 +0000 UTC m=+140.581943309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.310981 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.311310 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.81128114 +0000 UTC m=+140.682779192 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.412525 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.412961 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:30.912939882 +0000 UTC m=+140.784437934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.513937 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.514227 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.014191175 +0000 UTC m=+140.885689227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.514669 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.515011 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.015002285 +0000 UTC m=+140.886500337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.616258 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.616410 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.116387821 +0000 UTC m=+140.987885883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.616508 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.616629 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.618817 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.618837 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.633763 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.634677 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.667870 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.717716 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.717819 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.717841 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.718178 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.218156516 +0000 UTC m=+141.089654578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.721541 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.729448 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.745297 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.745675 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.819338 5103 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.819512 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.31948262 +0000 UTC m=+141.190980672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.820010 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.820534 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.320515555 +0000 UTC m=+141.192013607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.921218 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.921396 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.421370258 +0000 UTC m=+141.292868310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.921643 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:30 crc kubenswrapper[5103]: E0130 00:12:30.922311 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.42227249 +0000 UTC m=+141.293770552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.932898 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:12:30 crc kubenswrapper[5103]: I0130 00:12:30.942288 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.022390 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.022542 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.522518188 +0000 UTC m=+141.394016240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.022697 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.023079 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.523038291 +0000 UTC m=+141.394536343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.124516 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.124699 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.624671253 +0000 UTC m=+141.496169295 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.124926 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.125320 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.625303568 +0000 UTC m=+141.496801620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.226391 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.226532 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.72651271 +0000 UTC m=+141.598010762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.226698 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.226973 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.726962361 +0000 UTC m=+141.598460413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.328090 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.82806121 +0000 UTC m=+141.699559272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.328099 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.328580 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.328909 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.82889811 +0000 UTC m=+141.700396162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.386595 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" event={"ID":"fe0b1692-3dd7-4854-b53d-c32cd8162e1b","Type":"ContainerStarted","Data":"d6c29e6a0e420d4cd531b73b85ba8abd78aeceb53c509110c477fb6b2fad95e9"} Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.430558 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.430981 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:31.930951372 +0000 UTC m=+141.802449424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.532769 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.533146 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.033129707 +0000 UTC m=+141.904627759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.635519 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.636389 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.136350447 +0000 UTC m=+142.007848489 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.640385 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.641239 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.141217955 +0000 UTC m=+142.012716007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.703825 5103 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.741564 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.742426 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.242380266 +0000 UTC m=+142.113878438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.843360 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.843413 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.843774 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.343761992 +0000 UTC m=+142.215260044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.845884 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.859573 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/566ee5b2-938f-41f6-8625-e8a987181d60-metrics-certs\") pod \"network-metrics-daemon-vsrcq\" (UID: \"566ee5b2-938f-41f6-8625-e8a987181d60\") " pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.944502 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.944780 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.444741528 +0000 UTC m=+142.316239590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:31 crc kubenswrapper[5103]: I0130 00:12:31.945121 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:31 crc kubenswrapper[5103]: E0130 00:12:31.945633 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.445617439 +0000 UTC m=+142.317115491 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jfm6p" (UID: "d69ff998-a349-40e4-8653-bfded7d60952") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.046323 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:32 crc kubenswrapper[5103]: E0130 00:12:32.046483 5103 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-30 00:12:32.546462062 +0000 UTC m=+142.417960114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.074783 5103 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T00:12:31.703854569Z","UUID":"c3e61ce8-6247-4f60-95a2-118b5bac39b0","Handler":null,"Name":"","Endpoint":""} Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.079329 5103 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.079360 5103 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.111116 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.112185 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vsrcq" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.147532 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.164766 5103 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.164827 5103 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.205347 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jfm6p\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.248718 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.254643 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.320354 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.328808 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.424675 5103 ???:1] "http: TLS handshake error from 192.168.126.11:58636: no serving certificate available for the kubelet" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.775396 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.775463 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.775509 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.776029 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"cf440fc95fced9c1dec5f756ce0700f4d01d4bcefdae5034ff9f16546ffccb75"} pod="openshift-console/downloads-747b44746d-j77tr" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.776097 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" containerID="cri-o://cf440fc95fced9c1dec5f756ce0700f4d01d4bcefdae5034ff9f16546ffccb75" gracePeriod=2 Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.776512 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.776670 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:32 crc kubenswrapper[5103]: I0130 00:12:32.877854 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 30 00:12:34 crc kubenswrapper[5103]: I0130 00:12:34.068747 5103 patch_prober.go:28] interesting pod/console-64d44f6ddf-7v6vx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 30 00:12:34 crc kubenswrapper[5103]: I0130 00:12:34.068847 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-7v6vx" podUID="9bef77c6-141b-4cff-a91d-7515860a6a2a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: 
connection refused" Jan 30 00:12:34 crc kubenswrapper[5103]: I0130 00:12:34.406359 5103 generic.go:358] "Generic (PLEG): container finished" podID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerID="cf440fc95fced9c1dec5f756ce0700f4d01d4bcefdae5034ff9f16546ffccb75" exitCode=0 Jan 30 00:12:34 crc kubenswrapper[5103]: I0130 00:12:34.406456 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerDied","Data":"cf440fc95fced9c1dec5f756ce0700f4d01d4bcefdae5034ff9f16546ffccb75"} Jan 30 00:12:38 crc kubenswrapper[5103]: E0130 00:12:38.351759 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:38 crc kubenswrapper[5103]: E0130 00:12:38.355016 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:38 crc kubenswrapper[5103]: E0130 00:12:38.357311 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:38 crc kubenswrapper[5103]: E0130 00:12:38.357357 5103 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 30 00:12:42 crc kubenswrapper[5103]: I0130 00:12:42.778703 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:42 crc kubenswrapper[5103]: I0130 00:12:42.779668 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:44 crc kubenswrapper[5103]: I0130 00:12:44.105829 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:44 crc kubenswrapper[5103]: I0130 00:12:44.114673 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-7v6vx" Jan 30 00:12:48 crc kubenswrapper[5103]: E0130 00:12:48.350286 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:48 crc kubenswrapper[5103]: E0130 00:12:48.351917 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:48 crc kubenswrapper[5103]: E0130 00:12:48.353732 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:48 crc kubenswrapper[5103]: E0130 00:12:48.353818 5103 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 30 00:12:49 crc kubenswrapper[5103]: I0130 00:12:49.553234 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-mv94c" Jan 30 00:12:52 crc kubenswrapper[5103]: I0130 00:12:52.521127 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-cnbd2_e1617c52-82bc-4480-9bc4-e37e0264876e/kube-multus-additional-cni-plugins/0.log" Jan 30 00:12:52 crc kubenswrapper[5103]: I0130 00:12:52.521594 5103 generic.go:358] "Generic (PLEG): container finished" podID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" exitCode=137 Jan 30 00:12:52 crc kubenswrapper[5103]: I0130 00:12:52.521781 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" event={"ID":"e1617c52-82bc-4480-9bc4-e37e0264876e","Type":"ContainerDied","Data":"6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9"} Jan 30 00:12:52 crc kubenswrapper[5103]: I0130 00:12:52.777413 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:12:52 crc kubenswrapper[5103]: I0130 00:12:52.777531 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:12:52 crc kubenswrapper[5103]: I0130 00:12:52.943647 5103 ???:1] "http: TLS handshake error from 192.168.126.11:59950: no serving certificate available for the kubelet" Jan 30 00:12:58 crc kubenswrapper[5103]: E0130 00:12:58.346581 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9 is running 
failed: container process not found" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:58 crc kubenswrapper[5103]: E0130 00:12:58.347742 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9 is running failed: container process not found" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:58 crc kubenswrapper[5103]: E0130 00:12:58.348369 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9 is running failed: container process not found" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 30 00:12:58 crc kubenswrapper[5103]: E0130 00:12:58.348435 5103 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 30 00:12:59 crc kubenswrapper[5103]: I0130 00:12:59.335598 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 30 00:12:59 crc kubenswrapper[5103]: I0130 00:12:59.337269 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea21664f-12f0-4c35-bcb0-2f3b355f9153" containerName="pruner" Jan 30 00:12:59 crc kubenswrapper[5103]: I0130 00:12:59.337423 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea21664f-12f0-4c35-bcb0-2f3b355f9153" containerName="pruner" Jan 30 00:12:59 crc kubenswrapper[5103]: I0130 00:12:59.337623 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea21664f-12f0-4c35-bcb0-2f3b355f9153" containerName="pruner" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.627628 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.627822 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.631430 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.633273 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.736858 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.736936 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.838161 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.838653 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.838400 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.857789 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:01 crc kubenswrapper[5103]: I0130 00:13:01.960400 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:02 crc kubenswrapper[5103]: I0130 00:13:02.777397 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:02 crc kubenswrapper[5103]: I0130 00:13:02.777916 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.527694 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.729380 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.729667 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.797136 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-kubelet-dir\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.797211 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36d0743a-ddce-4bd2-8cca-44d42d9356da-kube-api-access\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.797249 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-var-lock\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.899330 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36d0743a-ddce-4bd2-8cca-44d42d9356da-kube-api-access\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.899429 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-var-lock\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.899515 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-kubelet-dir\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " 
pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.899589 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-var-lock\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.899624 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-kubelet-dir\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:05 crc kubenswrapper[5103]: I0130 00:13:05.925179 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36d0743a-ddce-4bd2-8cca-44d42d9356da-kube-api-access\") pod \"installer-12-crc\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:06 crc kubenswrapper[5103]: I0130 00:13:06.047578 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:13:07 crc kubenswrapper[5103]: I0130 00:13:07.935095 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-cnbd2_e1617c52-82bc-4480-9bc4-e37e0264876e/kube-multus-additional-cni-plugins/0.log" Jan 30 00:13:07 crc kubenswrapper[5103]: I0130 00:13:07.935225 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.028540 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1617c52-82bc-4480-9bc4-e37e0264876e-tuning-conf-dir\") pod \"e1617c52-82bc-4480-9bc4-e37e0264876e\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.028693 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1617c52-82bc-4480-9bc4-e37e0264876e-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "e1617c52-82bc-4480-9bc4-e37e0264876e" (UID: "e1617c52-82bc-4480-9bc4-e37e0264876e"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.028952 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e1617c52-82bc-4480-9bc4-e37e0264876e-ready\") pod \"e1617c52-82bc-4480-9bc4-e37e0264876e\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.028993 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e1617c52-82bc-4480-9bc4-e37e0264876e-cni-sysctl-allowlist\") pod \"e1617c52-82bc-4480-9bc4-e37e0264876e\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.029141 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbkqv\" (UniqueName: \"kubernetes.io/projected/e1617c52-82bc-4480-9bc4-e37e0264876e-kube-api-access-lbkqv\") pod \"e1617c52-82bc-4480-9bc4-e37e0264876e\" (UID: \"e1617c52-82bc-4480-9bc4-e37e0264876e\") " Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.029556 5103 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1617c52-82bc-4480-9bc4-e37e0264876e-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.029830 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1617c52-82bc-4480-9bc4-e37e0264876e-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "e1617c52-82bc-4480-9bc4-e37e0264876e" (UID: "e1617c52-82bc-4480-9bc4-e37e0264876e"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.029820 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1617c52-82bc-4480-9bc4-e37e0264876e-ready" (OuterVolumeSpecName: "ready") pod "e1617c52-82bc-4480-9bc4-e37e0264876e" (UID: "e1617c52-82bc-4480-9bc4-e37e0264876e"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.039881 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1617c52-82bc-4480-9bc4-e37e0264876e-kube-api-access-lbkqv" (OuterVolumeSpecName: "kube-api-access-lbkqv") pod "e1617c52-82bc-4480-9bc4-e37e0264876e" (UID: "e1617c52-82bc-4480-9bc4-e37e0264876e"). InnerVolumeSpecName "kube-api-access-lbkqv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.131789 5103 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/e1617c52-82bc-4480-9bc4-e37e0264876e-ready\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.131839 5103 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e1617c52-82bc-4480-9bc4-e37e0264876e-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.131861 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lbkqv\" (UniqueName: \"kubernetes.io/projected/e1617c52-82bc-4480-9bc4-e37e0264876e-kube-api-access-lbkqv\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:08 crc kubenswrapper[5103]: W0130 00:13:08.498414 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-15e8ad65b1b52db89bfe310060212442902e4d367186e248b32d329456326bfa WatchSource:0}: Error finding container 15e8ad65b1b52db89bfe310060212442902e4d367186e248b32d329456326bfa: Status 404 returned error can't find the container with id 15e8ad65b1b52db89bfe310060212442902e4d367186e248b32d329456326bfa Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.574368 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vsrcq"] Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.640929 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerStarted","Data":"fb88c4f3139f356c291358cb8aa4fa4cf78be8f2c5f1ebfdbfda9547ff3a84f8"} Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.641882 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" event={"ID":"566ee5b2-938f-41f6-8625-e8a987181d60","Type":"ContainerStarted","Data":"d4608abd8fe0941f7b6442e65d03e4a4c7fe4f59ac5332172c75cf635de5a05a"} Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.643187 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-cnbd2_e1617c52-82bc-4480-9bc4-e37e0264876e/kube-multus-additional-cni-plugins/0.log" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.643306 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" event={"ID":"e1617c52-82bc-4480-9bc4-e37e0264876e","Type":"ContainerDied","Data":"973863cd6d6133ec3ff6a7fd2a13f58a8dd52f466be2fd39e8f85026734e7547"} Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.643328 5103 scope.go:117] "RemoveContainer" containerID="6a73c2ab4be6819ee06000b51e4278a6074aa559820c7842a7089b60756b47e9" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.643389 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-cnbd2" Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.653552 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"15e8ad65b1b52db89bfe310060212442902e4d367186e248b32d329456326bfa"} Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.684227 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cnbd2"] Jan 30 00:13:08 crc kubenswrapper[5103]: I0130 00:13:08.687962 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-cnbd2"] Jan 30 00:13:08 crc kubenswrapper[5103]: W0130 00:13:08.846187 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-bda91773f6a5fc8744515643c8b1fdcb9b6ee6637bc16770952e46645f05d019 WatchSource:0}: Error finding container bda91773f6a5fc8744515643c8b1fdcb9b6ee6637bc16770952e46645f05d019: Status 404 returned error can't find the container with id bda91773f6a5fc8744515643c8b1fdcb9b6ee6637bc16770952e46645f05d019 Jan 30 00:13:08 crc kubenswrapper[5103]: W0130 00:13:08.847386 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd69ff998_a349_40e4_8653_bfded7d60952.slice/crio-4e145232ebdfb182b6a3d1e5a1b96cd199f982d856f76867803b018fe8ea7f1d WatchSource:0}: Error finding container 4e145232ebdfb182b6a3d1e5a1b96cd199f982d856f76867803b018fe8ea7f1d: Status 404 returned error can't find the container with id 4e145232ebdfb182b6a3d1e5a1b96cd199f982d856f76867803b018fe8ea7f1d Jan 30 00:13:09 crc kubenswrapper[5103]: W0130 00:13:09.026346 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda8e87128_3548_4aa6_97ae_4fbdebabb51b.slice/crio-9ca6233bea7004be5a192952ea083905d30a402b99b8bda22756390d149198c2 WatchSource:0}: Error finding container 9ca6233bea7004be5a192952ea083905d30a402b99b8bda22756390d149198c2: Status 404 returned error can't find the container with id 9ca6233bea7004be5a192952ea083905d30a402b99b8bda22756390d149198c2 Jan 30 00:13:09 crc kubenswrapper[5103]: I0130 00:13:09.666467 5103 generic.go:358] "Generic (PLEG): container finished" podID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerID="1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5" exitCode=0 Jan 30 00:13:09 crc kubenswrapper[5103]: I0130 00:13:09.673851 5103 generic.go:358] "Generic (PLEG): container finished" podID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerID="92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292" exitCode=0 Jan 30 00:13:09 crc kubenswrapper[5103]: I0130 00:13:09.716724 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:09 crc kubenswrapper[5103]: I0130 00:13:09.716798 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 
00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.274041 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"36d0743a-ddce-4bd2-8cca-44d42d9356da","Type":"ContainerStarted","Data":"d26e5dfec08469fb58fbea2e743d80f53ef5ef562fb067297ab9ef35b80c7464"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.274112 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.274195 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbjkv" event={"ID":"9807e5f5-fa63-4e0c-9b52-3c0044337c40","Type":"ContainerDied","Data":"1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.274216 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a8e87128-3548-4aa6-97ae-4fbdebabb51b","Type":"ContainerStarted","Data":"9ca6233bea7004be5a192952ea083905d30a402b99b8bda22756390d149198c2"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.283678 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" path="/var/lib/kubelet/pods/e1617c52-82bc-4480-9bc4-e37e0264876e/volumes" Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284355 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" event={"ID":"d69ff998-a349-40e4-8653-bfded7d60952","Type":"ContainerStarted","Data":"4e145232ebdfb182b6a3d1e5a1b96cd199f982d856f76867803b018fe8ea7f1d"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284389 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59s8" event={"ID":"c312b248-250c-4b33-9c7a-f79c1e73a75b","Type":"ContainerDied","Data":"92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284405 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jfm6p"] Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284419 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"bda91773f6a5fc8744515643c8b1fdcb9b6ee6637bc16770952e46645f05d019"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284432 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284442 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" event={"ID":"fe0b1692-3dd7-4854-b53d-c32cd8162e1b","Type":"ContainerStarted","Data":"64cb928b977091387595e423e5a54903621b359f7992c380e3153e8a477eefa3"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284456 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.284475 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" 
event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"a1300b5c3d788e6b60e029e3a403486ce2ec566c355064d07eac6df679192d2b"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.699320 5103 generic.go:358] "Generic (PLEG): container finished" podID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerID="a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be" exitCode=0 Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.699450 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c7gb" event={"ID":"ebb7f7db-c773-49f6-b58b-6bd929f25f3a","Type":"ContainerDied","Data":"a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.703203 5103 generic.go:358] "Generic (PLEG): container finished" podID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerID="b3c22393fbe801c108dcbbddf3bbfaff0479ecc6408293676bc4b5895feac0f7" exitCode=0 Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.703383 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj2cx" event={"ID":"3ce63351-9fca-4e0e-b4fb-3032a983ebcc","Type":"ContainerDied","Data":"b3c22393fbe801c108dcbbddf3bbfaff0479ecc6408293676bc4b5895feac0f7"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.705353 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"4d5ffd7684d68fe0303385b117e52f80b8bcedc1577f8188ae6e3d7ce592db56"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.707541 5103 generic.go:358] "Generic (PLEG): container finished" podID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerID="f2c861c6db2293d2cedb84994b2e896c4b940bfa88eb6866c500110833076dd3" exitCode=0 Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.707775 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzx54" event={"ID":"faf9931f-40f0-4d66-b375-89bec91fd6b8","Type":"ContainerDied","Data":"f2c861c6db2293d2cedb84994b2e896c4b940bfa88eb6866c500110833076dd3"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.711041 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerStarted","Data":"f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708"} Jan 30 00:13:10 crc kubenswrapper[5103]: I0130 00:13:10.717681 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerStarted","Data":"abfd0b471685a353ec69c6889c8ef870bc8a246f713d8c033c6ef4c6cea8cbc2"} Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.004807 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.004894 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection 
refused" Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.726421 5103 generic.go:358] "Generic (PLEG): container finished" podID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerID="f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708" exitCode=0 Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.726518 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerDied","Data":"f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708"} Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.731139 5103 generic.go:358] "Generic (PLEG): container finished" podID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerID="abfd0b471685a353ec69c6889c8ef870bc8a246f713d8c033c6ef4c6cea8cbc2" exitCode=0 Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.731363 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerDied","Data":"abfd0b471685a353ec69c6889c8ef870bc8a246f713d8c033c6ef4c6cea8cbc2"} Jan 30 00:13:11 crc kubenswrapper[5103]: I0130 00:13:11.733031 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerStarted","Data":"ed2bfc2b73398c1889d5a77eddd6e0ef71fb44a17294819baf27f50345e4955f"} Jan 30 00:13:12 crc kubenswrapper[5103]: I0130 00:13:12.741419 5103 generic.go:358] "Generic (PLEG): container finished" podID="096edab0-9031-4bcd-8451-a93417372ee1" containerID="ed2bfc2b73398c1889d5a77eddd6e0ef71fb44a17294819baf27f50345e4955f" exitCode=0 Jan 30 00:13:12 crc kubenswrapper[5103]: I0130 00:13:12.741499 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerDied","Data":"ed2bfc2b73398c1889d5a77eddd6e0ef71fb44a17294819baf27f50345e4955f"} Jan 30 00:13:12 crc kubenswrapper[5103]: I0130 00:13:12.743735 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"be400c94a1a85e05f6226e648d9c94032f43d6ef128d7bb3dc7c74aff25e68bd"} Jan 30 00:13:12 crc kubenswrapper[5103]: I0130 00:13:12.775321 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:12 crc kubenswrapper[5103]: I0130 00:13:12.775389 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:13 crc kubenswrapper[5103]: I0130 00:13:13.751587 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" event={"ID":"566ee5b2-938f-41f6-8625-e8a987181d60","Type":"ContainerStarted","Data":"a42e6af7e4fdd14b0555dbc45cc5b48df70e1022fde98251062f220847d01610"} Jan 30 00:13:14 crc kubenswrapper[5103]: I0130 00:13:14.758205 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"2a1d2a6fd9b0415c90f46e84f4dbf0c0ca79a15746a84e5e6dd0f2a6d613540a"} Jan 30 00:13:14 crc kubenswrapper[5103]: I0130 00:13:14.759901 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"36d0743a-ddce-4bd2-8cca-44d42d9356da","Type":"ContainerStarted","Data":"7d8a9b02754e84af20022228fe1cad64203a3b40b3a8196d40d777d92317e4f3"} Jan 30 00:13:14 crc kubenswrapper[5103]: I0130 00:13:14.761575 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" event={"ID":"d69ff998-a349-40e4-8653-bfded7d60952","Type":"ContainerStarted","Data":"ffcde02830ce4ad7b97b4b84ec1411fc924348315e06fb6b2821c02bafdfedc3"} Jan 30 00:13:15 crc kubenswrapper[5103]: I0130 00:13:15.173830 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:16 crc kubenswrapper[5103]: I0130 00:13:16.395561 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:13:16 crc kubenswrapper[5103]: I0130 00:13:16.432661 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" podStartSLOduration=163.432630167 podStartE2EDuration="2m43.432630167s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:16.428706891 +0000 UTC m=+186.300205003" watchObservedRunningTime="2026-01-30 00:13:16.432630167 +0000 UTC m=+186.304128259" Jan 30 00:13:16 crc kubenswrapper[5103]: I0130 00:13:16.776543 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a8e87128-3548-4aa6-97ae-4fbdebabb51b","Type":"ContainerStarted","Data":"38b7745e7a20717b902287280a735e1afd97d82ade62ae000080b58a64c6ef28"} Jan 30 00:13:17 crc kubenswrapper[5103]: I0130 00:13:17.787461 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" event={"ID":"fe0b1692-3dd7-4854-b53d-c32cd8162e1b","Type":"ContainerStarted","Data":"d53e734783ab297ba9e52fe92a54022392d3212be964e499bd29b942fa8453ef"} Jan 30 00:13:18 crc kubenswrapper[5103]: I0130 00:13:18.798168 5103 generic.go:358] "Generic (PLEG): container finished" podID="a8e87128-3548-4aa6-97ae-4fbdebabb51b" containerID="38b7745e7a20717b902287280a735e1afd97d82ade62ae000080b58a64c6ef28" exitCode=0 Jan 30 00:13:18 crc kubenswrapper[5103]: I0130 00:13:18.798282 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a8e87128-3548-4aa6-97ae-4fbdebabb51b","Type":"ContainerDied","Data":"38b7745e7a20717b902287280a735e1afd97d82ade62ae000080b58a64c6ef28"} Jan 30 00:13:18 crc kubenswrapper[5103]: I0130 00:13:18.803942 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59s8" event={"ID":"c312b248-250c-4b33-9c7a-f79c1e73a75b","Type":"ContainerStarted","Data":"9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39"} Jan 30 00:13:19 crc kubenswrapper[5103]: I0130 00:13:19.815815 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-nbjkv" event={"ID":"9807e5f5-fa63-4e0c-9b52-3c0044337c40","Type":"ContainerStarted","Data":"775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147"} Jan 30 00:13:19 crc kubenswrapper[5103]: I0130 00:13:19.923564 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=20.923540831 podStartE2EDuration="20.923540831s" podCreationTimestamp="2026-01-30 00:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:19.919099493 +0000 UTC m=+189.790597615" watchObservedRunningTime="2026-01-30 00:13:19.923540831 +0000 UTC m=+189.795038893" Jan 30 00:13:20 crc kubenswrapper[5103]: I0130 00:13:20.827607 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj2cx" event={"ID":"3ce63351-9fca-4e0e-b4fb-3032a983ebcc","Type":"ContainerStarted","Data":"61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483"} Jan 30 00:13:21 crc kubenswrapper[5103]: I0130 00:13:21.006218 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:21 crc kubenswrapper[5103]: I0130 00:13:21.006741 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:21 crc kubenswrapper[5103]: I0130 00:13:21.425667 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-69ms4" podStartSLOduration=80.425630105 podStartE2EDuration="1m20.425630105s" podCreationTimestamp="2026-01-30 00:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:21.422141 +0000 UTC m=+191.293639062" watchObservedRunningTime="2026-01-30 00:13:21.425630105 +0000 UTC m=+191.297128207" Jan 30 00:13:21 crc kubenswrapper[5103]: I0130 00:13:21.445530 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=16.445505928 podStartE2EDuration="16.445505928s" podCreationTimestamp="2026-01-30 00:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:21.440076166 +0000 UTC m=+191.311574228" watchObservedRunningTime="2026-01-30 00:13:21.445505928 +0000 UTC m=+191.317003990" Jan 30 00:13:21 crc kubenswrapper[5103]: I0130 00:13:21.840361 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzx54" event={"ID":"faf9931f-40f0-4d66-b375-89bec91fd6b8","Type":"ContainerStarted","Data":"181f0aa32a1598e1078e06c18371f78494d47cd1fbe974edab50a99336c9d2fb"} Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.336644 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nbjkv" podStartSLOduration=20.047607022 podStartE2EDuration="1m2.336605691s" 
podCreationTimestamp="2026-01-30 00:12:20 +0000 UTC" firstStartedPulling="2026-01-30 00:12:24.702890404 +0000 UTC m=+134.574388496" lastFinishedPulling="2026-01-30 00:13:06.991889073 +0000 UTC m=+176.863387165" observedRunningTime="2026-01-30 00:13:22.328544245 +0000 UTC m=+192.200042357" watchObservedRunningTime="2026-01-30 00:13:22.336605691 +0000 UTC m=+192.208103783" Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.377547 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z59s8" podStartSLOduration=19.743292413 podStartE2EDuration="1m1.377519916s" podCreationTimestamp="2026-01-30 00:12:21 +0000 UTC" firstStartedPulling="2026-01-30 00:12:26.281350835 +0000 UTC m=+136.152848887" lastFinishedPulling="2026-01-30 00:13:07.915578298 +0000 UTC m=+177.787076390" observedRunningTime="2026-01-30 00:13:22.374191685 +0000 UTC m=+192.245689797" watchObservedRunningTime="2026-01-30 00:13:22.377519916 +0000 UTC m=+192.249017988" Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.478604 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.479009 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.775527 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.775631 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:22 crc kubenswrapper[5103]: I0130 00:13:22.850900 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c7gb" event={"ID":"ebb7f7db-c773-49f6-b58b-6bd929f25f3a","Type":"ContainerStarted","Data":"7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858"} Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.054326 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.096882 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kube-api-access\") pod \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.096998 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kubelet-dir\") pod \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\" (UID: \"a8e87128-3548-4aa6-97ae-4fbdebabb51b\") " Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.097459 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a8e87128-3548-4aa6-97ae-4fbdebabb51b" (UID: "a8e87128-3548-4aa6-97ae-4fbdebabb51b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.108560 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a8e87128-3548-4aa6-97ae-4fbdebabb51b" (UID: "a8e87128-3548-4aa6-97ae-4fbdebabb51b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.153200 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qj2cx" podStartSLOduration=20.937751093 podStartE2EDuration="1m3.153185641s" podCreationTimestamp="2026-01-30 00:12:20 +0000 UTC" firstStartedPulling="2026-01-30 00:12:26.31279064 +0000 UTC m=+136.184288692" lastFinishedPulling="2026-01-30 00:13:08.528225188 +0000 UTC m=+178.399723240" observedRunningTime="2026-01-30 00:13:23.151771437 +0000 UTC m=+193.023269499" watchObservedRunningTime="2026-01-30 00:13:23.153185641 +0000 UTC m=+193.024683693" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.173231 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vzx54" podStartSLOduration=20.380790706 podStartE2EDuration="1m3.173210238s" podCreationTimestamp="2026-01-30 00:12:20 +0000 UTC" firstStartedPulling="2026-01-30 00:12:25.726162762 +0000 UTC m=+135.597660814" lastFinishedPulling="2026-01-30 00:13:08.518582294 +0000 UTC m=+178.390080346" observedRunningTime="2026-01-30 00:13:23.170728588 +0000 UTC m=+193.042226730" watchObservedRunningTime="2026-01-30 00:13:23.173210238 +0000 UTC m=+193.044708290" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.198649 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.198709 5103 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e87128-3548-4aa6-97ae-4fbdebabb51b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.858373 5103 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a8e87128-3548-4aa6-97ae-4fbdebabb51b","Type":"ContainerDied","Data":"9ca6233bea7004be5a192952ea083905d30a402b99b8bda22756390d149198c2"} Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.859781 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ca6233bea7004be5a192952ea083905d30a402b99b8bda22756390d149198c2" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.858526 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 30 00:13:23 crc kubenswrapper[5103]: I0130 00:13:23.860801 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerStarted","Data":"ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0"} Jan 30 00:13:25 crc kubenswrapper[5103]: I0130 00:13:25.138408 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-z59s8" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="registry-server" probeResult="failure" output=< Jan 30 00:13:25 crc kubenswrapper[5103]: timeout: failed to connect service ":50051" within 1s Jan 30 00:13:25 crc kubenswrapper[5103]: > Jan 30 00:13:25 crc kubenswrapper[5103]: I0130 00:13:25.472624 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2rjzw" podStartSLOduration=21.267830466 podStartE2EDuration="1m2.472607243s" podCreationTimestamp="2026-01-30 00:12:23 +0000 UTC" firstStartedPulling="2026-01-30 00:12:27.333482525 +0000 UTC m=+137.204980597" lastFinishedPulling="2026-01-30 00:13:08.538259322 +0000 UTC m=+178.409757374" observedRunningTime="2026-01-30 00:13:25.471480076 +0000 UTC m=+195.342978138" watchObservedRunningTime="2026-01-30 00:13:25.472607243 +0000 UTC m=+195.344105295" Jan 30 00:13:25 crc kubenswrapper[5103]: I0130 00:13:25.502489 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7c7gb" podStartSLOduration=21.76005582 podStartE2EDuration="1m6.502467979s" podCreationTimestamp="2026-01-30 00:12:19 +0000 UTC" firstStartedPulling="2026-01-30 00:12:23.587271191 +0000 UTC m=+133.458769243" lastFinishedPulling="2026-01-30 00:13:08.32968334 +0000 UTC m=+178.201181402" observedRunningTime="2026-01-30 00:13:25.499933008 +0000 UTC m=+195.371431070" watchObservedRunningTime="2026-01-30 00:13:25.502467979 +0000 UTC m=+195.373966031" Jan 30 00:13:25 crc kubenswrapper[5103]: I0130 00:13:25.872747 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vsrcq" event={"ID":"566ee5b2-938f-41f6-8625-e8a987181d60","Type":"ContainerStarted","Data":"0259a9e9eace4fc172ce32f2b8eecdb8ae6d65184d193746feff43d1d4feb368"} Jan 30 00:13:25 crc kubenswrapper[5103]: I0130 00:13:25.874775 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerStarted","Data":"372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085"} Jan 30 00:13:25 crc kubenswrapper[5103]: I0130 00:13:25.892464 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xpqb7" podStartSLOduration=22.711067996 podStartE2EDuration="1m3.892441114s" 
podCreationTimestamp="2026-01-30 00:12:22 +0000 UTC" firstStartedPulling="2026-01-30 00:12:27.336634642 +0000 UTC m=+137.208132694" lastFinishedPulling="2026-01-30 00:13:08.51800775 +0000 UTC m=+178.389505812" observedRunningTime="2026-01-30 00:13:25.890319203 +0000 UTC m=+195.761817255" watchObservedRunningTime="2026-01-30 00:13:25.892441114 +0000 UTC m=+195.763939186" Jan 30 00:13:26 crc kubenswrapper[5103]: I0130 00:13:26.882545 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerStarted","Data":"3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a"} Jan 30 00:13:26 crc kubenswrapper[5103]: I0130 00:13:26.897317 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vsrcq" podStartSLOduration=173.897294844 podStartE2EDuration="2m53.897294844s" podCreationTimestamp="2026-01-30 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:13:26.895504321 +0000 UTC m=+196.767002413" watchObservedRunningTime="2026-01-30 00:13:26.897294844 +0000 UTC m=+196.768792906" Jan 30 00:13:27 crc kubenswrapper[5103]: I0130 00:13:27.924467 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bhpd7" podStartSLOduration=23.69411163 podStartE2EDuration="1m4.924446219s" podCreationTimestamp="2026-01-30 00:12:23 +0000 UTC" firstStartedPulling="2026-01-30 00:12:27.324611979 +0000 UTC m=+137.196110031" lastFinishedPulling="2026-01-30 00:13:08.554946568 +0000 UTC m=+178.426444620" observedRunningTime="2026-01-30 00:13:27.921341293 +0000 UTC m=+197.792839355" watchObservedRunningTime="2026-01-30 00:13:27.924446219 +0000 UTC m=+197.795944281" Jan 30 00:13:29 crc kubenswrapper[5103]: I0130 00:13:29.897847 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:13:30 crc kubenswrapper[5103]: I0130 00:13:30.883887 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:13:30 crc kubenswrapper[5103]: I0130 00:13:30.884839 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:13:30 crc kubenswrapper[5103]: I0130 00:13:30.972509 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.005044 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.005493 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.451011 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.452773 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.514696 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.816815 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.817293 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.869019 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.957408 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.964157 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:13:31 crc kubenswrapper[5103]: I0130 00:13:31.978600 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.289827 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.289883 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.343420 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.529410 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.570900 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.775977 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.776362 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.776421 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.776920 5103 patch_prober.go:28] interesting 
pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.777068 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.777003 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"fb88c4f3139f356c291358cb8aa4fa4cf78be8f2c5f1ebfdbfda9547ff3a84f8"} pod="openshift-console/downloads-747b44746d-j77tr" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.777222 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" containerID="cri-o://fb88c4f3139f356c291358cb8aa4fa4cf78be8f2c5f1ebfdbfda9547ff3a84f8" gracePeriod=2 Jan 30 00:13:32 crc kubenswrapper[5103]: I0130 00:13:32.958351 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:13:33 crc kubenswrapper[5103]: I0130 00:13:33.931916 5103 ???:1] "http: TLS handshake error from 192.168.126.11:54516: no serving certificate available for the kubelet" Jan 30 00:13:33 crc kubenswrapper[5103]: I0130 00:13:33.937186 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:13:33 crc kubenswrapper[5103]: I0130 00:13:33.938379 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.016122 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.211396 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzx54"] Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.212396 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vzx54" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="registry-server" containerID="cri-o://181f0aa32a1598e1078e06c18371f78494d47cd1fbe974edab50a99336c9d2fb" gracePeriod=2 Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.936388 5103 generic.go:358] "Generic (PLEG): container finished" podID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerID="fb88c4f3139f356c291358cb8aa4fa4cf78be8f2c5f1ebfdbfda9547ff3a84f8" exitCode=0 Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.937426 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerDied","Data":"fb88c4f3139f356c291358cb8aa4fa4cf78be8f2c5f1ebfdbfda9547ff3a84f8"} Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.937603 5103 scope.go:117] "RemoveContainer" 
containerID="cf440fc95fced9c1dec5f756ce0700f4d01d4bcefdae5034ff9f16546ffccb75" Jan 30 00:13:34 crc kubenswrapper[5103]: I0130 00:13:34.992760 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:13:35 crc kubenswrapper[5103]: I0130 00:13:35.062781 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:13:35 crc kubenswrapper[5103]: I0130 00:13:35.062868 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:13:35 crc kubenswrapper[5103]: I0130 00:13:35.132231 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:13:36 crc kubenswrapper[5103]: I0130 00:13:36.011960 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:13:36 crc kubenswrapper[5103]: I0130 00:13:36.049747 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:13:36 crc kubenswrapper[5103]: I0130 00:13:36.049832 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:13:36 crc kubenswrapper[5103]: I0130 00:13:36.105781 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:13:36 crc kubenswrapper[5103]: I0130 00:13:36.615239 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qj2cx"] Jan 30 00:13:36 crc kubenswrapper[5103]: I0130 00:13:36.618946 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qj2cx" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="registry-server" containerID="cri-o://61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483" gracePeriod=2 Jan 30 00:13:37 crc kubenswrapper[5103]: I0130 00:13:37.018875 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:13:37 crc kubenswrapper[5103]: I0130 00:13:37.978391 5103 generic.go:358] "Generic (PLEG): container finished" podID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerID="181f0aa32a1598e1078e06c18371f78494d47cd1fbe974edab50a99336c9d2fb" exitCode=0 Jan 30 00:13:37 crc kubenswrapper[5103]: I0130 00:13:37.978518 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzx54" event={"ID":"faf9931f-40f0-4d66-b375-89bec91fd6b8","Type":"ContainerDied","Data":"181f0aa32a1598e1078e06c18371f78494d47cd1fbe974edab50a99336c9d2fb"} Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.832444 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.932803 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftd25\" (UniqueName: \"kubernetes.io/projected/faf9931f-40f0-4d66-b375-89bec91fd6b8-kube-api-access-ftd25\") pod \"faf9931f-40f0-4d66-b375-89bec91fd6b8\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.932948 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-catalog-content\") pod \"faf9931f-40f0-4d66-b375-89bec91fd6b8\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.932973 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-utilities\") pod \"faf9931f-40f0-4d66-b375-89bec91fd6b8\" (UID: \"faf9931f-40f0-4d66-b375-89bec91fd6b8\") " Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.935399 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-utilities" (OuterVolumeSpecName: "utilities") pod "faf9931f-40f0-4d66-b375-89bec91fd6b8" (UID: "faf9931f-40f0-4d66-b375-89bec91fd6b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.939632 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf9931f-40f0-4d66-b375-89bec91fd6b8-kube-api-access-ftd25" (OuterVolumeSpecName: "kube-api-access-ftd25") pod "faf9931f-40f0-4d66-b375-89bec91fd6b8" (UID: "faf9931f-40f0-4d66-b375-89bec91fd6b8"). InnerVolumeSpecName "kube-api-access-ftd25". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.992303 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerStarted","Data":"59c82ec1806032dbe373182f75befd328452c9166c192bca363da6c02d99e1c0"} Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.995007 5103 generic.go:358] "Generic (PLEG): container finished" podID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerID="61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483" exitCode=0 Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.995127 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj2cx" event={"ID":"3ce63351-9fca-4e0e-b4fb-3032a983ebcc","Type":"ContainerDied","Data":"61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483"} Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.998642 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzx54" event={"ID":"faf9931f-40f0-4d66-b375-89bec91fd6b8","Type":"ContainerDied","Data":"efd1833ce5dba5fe0d8d29ba5d602d25ca88ad5bac471d3550f1eabc547727e3"} Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.998663 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzx54" Jan 30 00:13:38 crc kubenswrapper[5103]: I0130 00:13:38.998729 5103 scope.go:117] "RemoveContainer" containerID="181f0aa32a1598e1078e06c18371f78494d47cd1fbe974edab50a99336c9d2fb" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.019604 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpqb7"] Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.020018 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xpqb7" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="registry-server" containerID="cri-o://372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" gracePeriod=2 Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.020599 5103 scope.go:117] "RemoveContainer" containerID="f2c861c6db2293d2cedb84994b2e896c4b940bfa88eb6866c500110833076dd3" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.034931 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.034977 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftd25\" (UniqueName: \"kubernetes.io/projected/faf9931f-40f0-4d66-b375-89bec91fd6b8-kube-api-access-ftd25\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.041879 5103 scope.go:117] "RemoveContainer" containerID="9b8ee9cc3437496d869aca397a52ca77f07188d54f568012703f601a70efc9d2" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.255311 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faf9931f-40f0-4d66-b375-89bec91fd6b8" (UID: "faf9931f-40f0-4d66-b375-89bec91fd6b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.339317 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzx54"] Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.340493 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9931f-40f0-4d66-b375-89bec91fd6b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.342812 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vzx54"] Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.622557 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bhpd7"] Jan 30 00:13:39 crc kubenswrapper[5103]: I0130 00:13:39.622901 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bhpd7" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="registry-server" containerID="cri-o://3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a" gracePeriod=2 Jan 30 00:13:40 crc kubenswrapper[5103]: I0130 00:13:40.009866 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:13:40 crc kubenswrapper[5103]: I0130 00:13:40.010215 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:40 crc kubenswrapper[5103]: I0130 00:13:40.010288 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:40 crc kubenswrapper[5103]: I0130 00:13:40.884986 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" path="/var/lib/kubelet/pods/faf9931f-40f0-4d66-b375-89bec91fd6b8/volumes" Jan 30 00:13:41 crc kubenswrapper[5103]: I0130 00:13:41.017352 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:41 crc kubenswrapper[5103]: I0130 00:13:41.017440 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:42 crc kubenswrapper[5103]: I0130 00:13:42.775073 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:42 crc kubenswrapper[5103]: I0130 00:13:42.775185 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" 
podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:42 crc kubenswrapper[5103]: E0130 00:13:42.921377 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483 is running failed: container process not found" containerID="61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:42 crc kubenswrapper[5103]: E0130 00:13:42.922637 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483 is running failed: container process not found" containerID="61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:42 crc kubenswrapper[5103]: E0130 00:13:42.923082 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483 is running failed: container process not found" containerID="61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:42 crc kubenswrapper[5103]: E0130 00:13:42.923118 5103 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-qj2cx" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="registry-server" probeResult="unknown" Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.385037 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.503309 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-utilities\") pod \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.503442 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-catalog-content\") pod \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.503619 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rhcn\" (UniqueName: \"kubernetes.io/projected/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-kube-api-access-2rhcn\") pod \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\" (UID: \"3ce63351-9fca-4e0e-b4fb-3032a983ebcc\") " Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.505088 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-utilities" (OuterVolumeSpecName: "utilities") pod "3ce63351-9fca-4e0e-b4fb-3032a983ebcc" (UID: "3ce63351-9fca-4e0e-b4fb-3032a983ebcc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.510509 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-kube-api-access-2rhcn" (OuterVolumeSpecName: "kube-api-access-2rhcn") pod "3ce63351-9fca-4e0e-b4fb-3032a983ebcc" (UID: "3ce63351-9fca-4e0e-b4fb-3032a983ebcc"). InnerVolumeSpecName "kube-api-access-2rhcn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.606109 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2rhcn\" (UniqueName: \"kubernetes.io/projected/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-kube-api-access-2rhcn\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:43 crc kubenswrapper[5103]: I0130 00:13:43.606181 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:44 crc kubenswrapper[5103]: E0130 00:13:44.939255 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:44 crc kubenswrapper[5103]: E0130 00:13:44.940463 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:44 crc kubenswrapper[5103]: E0130 00:13:44.940886 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:44 crc kubenswrapper[5103]: E0130 00:13:44.941182 5103 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-xpqb7" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="registry-server" probeResult="unknown" Jan 30 00:13:45 crc kubenswrapper[5103]: I0130 00:13:45.614937 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ce63351-9fca-4e0e-b4fb-3032a983ebcc" (UID: "3ce63351-9fca-4e0e-b4fb-3032a983ebcc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:45 crc kubenswrapper[5103]: I0130 00:13:45.634436 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ce63351-9fca-4e0e-b4fb-3032a983ebcc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.743824 5103 generic.go:358] "Generic (PLEG): container finished" podID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" exitCode=0 Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.743943 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerDied","Data":"372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085"} Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.749470 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj2cx" event={"ID":"3ce63351-9fca-4e0e-b4fb-3032a983ebcc","Type":"ContainerDied","Data":"d31bb6a2f9fb799d1f7776dc6dbb0a5dcdd009e2858db6301a056354672735ba"} Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.749878 5103 scope.go:117] "RemoveContainer" containerID="61b7e3d7bda45ccba276e3b0cd735aefbb85c50cc41a38a5806aa5fdbd4ac483" Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.749555 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qj2cx" Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.771694 5103 scope.go:117] "RemoveContainer" containerID="b3c22393fbe801c108dcbbddf3bbfaff0479ecc6408293676bc4b5895feac0f7" Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.799478 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qj2cx"] Jan 30 00:13:46 crc kubenswrapper[5103]: I0130 00:13:46.807524 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qj2cx"] Jan 30 00:13:46 crc kubenswrapper[5103]: E0130 00:13:46.969455 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a is running failed: container process not found" containerID="3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:46 crc kubenswrapper[5103]: E0130 00:13:46.970438 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a is running failed: container process not found" containerID="3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:46 crc kubenswrapper[5103]: E0130 00:13:46.971095 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a is running failed: container process not found" containerID="3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:46 crc kubenswrapper[5103]: E0130 00:13:46.971221 5103 
prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-bhpd7" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="registry-server" probeResult="unknown" Jan 30 00:13:47 crc kubenswrapper[5103]: I0130 00:13:47.194859 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" path="/var/lib/kubelet/pods/3ce63351-9fca-4e0e-b4fb-3032a983ebcc/volumes" Jan 30 00:13:47 crc kubenswrapper[5103]: I0130 00:13:47.718447 5103 scope.go:117] "RemoveContainer" containerID="650d7faa5f4f892e52058d54951c121f7eb03b49005bdf02d2d0dcbf11476748" Jan 30 00:13:47 crc kubenswrapper[5103]: I0130 00:13:47.736881 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 30 00:13:51 crc kubenswrapper[5103]: I0130 00:13:51.018021 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:51 crc kubenswrapper[5103]: I0130 00:13:51.018208 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:51 crc kubenswrapper[5103]: I0130 00:13:51.508209 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bhpd7_096edab0-9031-4bcd-8451-a93417372ee1/registry-server/0.log" Jan 30 00:13:51 crc kubenswrapper[5103]: I0130 00:13:51.510007 5103 generic.go:358] "Generic (PLEG): container finished" podID="096edab0-9031-4bcd-8451-a93417372ee1" containerID="3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a" exitCode=-1 Jan 30 00:13:51 crc kubenswrapper[5103]: I0130 00:13:51.510125 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerDied","Data":"3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a"} Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.776334 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.776674 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.983550 5103 trace.go:236] Trace[1622085638]: "Calculate volume metrics of trusted-ca for pod openshift-ingress-operator/ingress-operator-6b9cb4dbcf-knxwb" (30-Jan-2026 00:13:51.665) (total time: 1318ms): Jan 30 00:13:52 crc kubenswrapper[5103]: 
Trace[1622085638]: [1.318488432s] [1.318488432s] END Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.993199 5103 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994405 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="registry-server" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994451 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="registry-server" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994500 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="extract-utilities" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994514 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="extract-utilities" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994537 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="extract-content" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994551 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="extract-content" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994571 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994584 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994601 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="registry-server" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994613 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="registry-server" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994640 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="extract-utilities" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994652 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="extract-utilities" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994675 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a8e87128-3548-4aa6-97ae-4fbdebabb51b" containerName="pruner" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994686 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e87128-3548-4aa6-97ae-4fbdebabb51b" containerName="pruner" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994732 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="extract-content" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994745 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="extract-content" 
Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994953 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="e1617c52-82bc-4480-9bc4-e37e0264876e" containerName="kube-multus-additional-cni-plugins" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.994993 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="a8e87128-3548-4aa6-97ae-4fbdebabb51b" containerName="pruner" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.995015 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3ce63351-9fca-4e0e-b4fb-3032a983ebcc" containerName="registry-server" Jan 30 00:13:52 crc kubenswrapper[5103]: I0130 00:13:52.995036 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="faf9931f-40f0-4d66-b375-89bec91fd6b8" containerName="registry-server" Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.787546 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bhpd7_096edab0-9031-4bcd-8451-a93417372ee1/registry-server/0.log" Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.789567 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.857750 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nnsl\" (UniqueName: \"kubernetes.io/projected/096edab0-9031-4bcd-8451-a93417372ee1-kube-api-access-6nnsl\") pod \"096edab0-9031-4bcd-8451-a93417372ee1\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.857934 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-utilities\") pod \"096edab0-9031-4bcd-8451-a93417372ee1\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.858116 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-catalog-content\") pod \"096edab0-9031-4bcd-8451-a93417372ee1\" (UID: \"096edab0-9031-4bcd-8451-a93417372ee1\") " Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.861105 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-utilities" (OuterVolumeSpecName: "utilities") pod "096edab0-9031-4bcd-8451-a93417372ee1" (UID: "096edab0-9031-4bcd-8451-a93417372ee1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.870768 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096edab0-9031-4bcd-8451-a93417372ee1-kube-api-access-6nnsl" (OuterVolumeSpecName: "kube-api-access-6nnsl") pod "096edab0-9031-4bcd-8451-a93417372ee1" (UID: "096edab0-9031-4bcd-8451-a93417372ee1"). InnerVolumeSpecName "kube-api-access-6nnsl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.961446 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:53 crc kubenswrapper[5103]: I0130 00:13:53.961504 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6nnsl\" (UniqueName: \"kubernetes.io/projected/096edab0-9031-4bcd-8451-a93417372ee1-kube-api-access-6nnsl\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:54 crc kubenswrapper[5103]: I0130 00:13:54.041511 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bhpd7_096edab0-9031-4bcd-8451-a93417372ee1/registry-server/0.log" Jan 30 00:13:54 crc kubenswrapper[5103]: E0130 00:13:54.939345 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:54 crc kubenswrapper[5103]: E0130 00:13:54.939885 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:54 crc kubenswrapper[5103]: E0130 00:13:54.940747 5103 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 00:13:54 crc kubenswrapper[5103]: E0130 00:13:54.940787 5103 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-xpqb7" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="registry-server" probeResult="unknown" Jan 30 00:13:54 crc kubenswrapper[5103]: I0130 00:13:54.991907 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.081838 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-utilities\") pod \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.081939 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f77zp\" (UniqueName: \"kubernetes.io/projected/3d4d4fce-00ed-4163-8a52-864aa4d324e6-kube-api-access-f77zp\") pod \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.082109 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-catalog-content\") pod \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\" (UID: \"3d4d4fce-00ed-4163-8a52-864aa4d324e6\") " Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.083739 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-utilities" (OuterVolumeSpecName: "utilities") pod "3d4d4fce-00ed-4163-8a52-864aa4d324e6" (UID: "3d4d4fce-00ed-4163-8a52-864aa4d324e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.088740 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4d4fce-00ed-4163-8a52-864aa4d324e6-kube-api-access-f77zp" (OuterVolumeSpecName: "kube-api-access-f77zp") pod "3d4d4fce-00ed-4163-8a52-864aa4d324e6" (UID: "3d4d4fce-00ed-4163-8a52-864aa4d324e6"). InnerVolumeSpecName "kube-api-access-f77zp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.092808 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d4d4fce-00ed-4163-8a52-864aa4d324e6" (UID: "3d4d4fce-00ed-4163-8a52-864aa4d324e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.183659 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.183707 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f77zp\" (UniqueName: \"kubernetes.io/projected/3d4d4fce-00ed-4163-8a52-864aa4d324e6-kube-api-access-f77zp\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:55 crc kubenswrapper[5103]: I0130 00:13:55.183719 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d4d4fce-00ed-4163-8a52-864aa4d324e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:56 crc kubenswrapper[5103]: I0130 00:13:56.450003 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "096edab0-9031-4bcd-8451-a93417372ee1" (UID: "096edab0-9031-4bcd-8451-a93417372ee1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:13:56 crc kubenswrapper[5103]: I0130 00:13:56.502006 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/096edab0-9031-4bcd-8451-a93417372ee1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:13:58 crc kubenswrapper[5103]: I0130 00:13:58.493711 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:13:58 crc kubenswrapper[5103]: I0130 00:13:58.494343 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:13:59 crc kubenswrapper[5103]: I0130 00:13:59.091176 5103 generic.go:358] "Generic (PLEG): container finished" podID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" containerID="14c110c2aafcebf401f14c4e8482618b6d3c8697a12a7383624870029d5a39de" exitCode=0 Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.017361 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.017462 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.609456 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhpd7" 
event={"ID":"096edab0-9031-4bcd-8451-a93417372ee1","Type":"ContainerDied","Data":"d4066f004b894aa275e8f17ab459177453c90a0d285ce8521d3e860edb7bf0cf"} Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.609594 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpqb7" event={"ID":"3d4d4fce-00ed-4163-8a52-864aa4d324e6","Type":"ContainerDied","Data":"7644a5d832213c30e73c7160330a4d5f9a395115e8e0a49061670b16a87be474"} Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.609666 5103 scope.go:117] "RemoveContainer" containerID="3b1d472b67c873466ab1afef35e0dbbbb1752efd96c00a5c4687402ad6978e4a" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.609696 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpqb7" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.609696 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhpd7" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.610874 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.629636 5103 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.629690 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-x6t57" event={"ID":"c5938973-a6f9-4d60-b605-3f02b2c1c84f","Type":"ContainerDied","Data":"14c110c2aafcebf401f14c4e8482618b6d3c8697a12a7383624870029d5a39de"} Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.629714 5103 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.630369 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049" gracePeriod=15 Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.630434 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6" gracePeriod=15 Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.630569 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e" gracePeriod=15 Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.630596 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7db8fb50f766d64858fb9c23c921f7327de27610f6bcaf84791914b161dde1c5" gracePeriod=15 Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.630647 5103 kuberuntime_container.go:858] "Killing container with a grace 
period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be" gracePeriod=15 Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631603 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631623 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631640 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="extract-content" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631646 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="extract-content" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631655 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631661 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631670 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631676 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631687 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631694 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631700 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631706 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631714 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="extract-content" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631719 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="extract-content" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631726 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="registry-server" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631731 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="096edab0-9031-4bcd-8451-a93417372ee1" 
containerName="registry-server" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631737 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="extract-utilities" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631744 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="extract-utilities" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631753 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631758 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631765 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="registry-server" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631770 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="registry-server" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631782 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631820 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631829 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631834 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631843 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631848 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631853 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631858 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631865 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="extract-utilities" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631870 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="extract-utilities" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631967 5103 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" containerName="registry-server" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631983 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.631990 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632000 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632009 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632019 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632026 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632032 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="096edab0-9031-4bcd-8451-a93417372ee1" containerName="registry-server" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632040 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632060 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.632067 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.662257 5103 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.675025 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.694455 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.694517 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc 
kubenswrapper[5103]: I0130 00:14:01.694550 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.694735 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.695604 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797656 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797736 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797793 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797812 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797870 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797916 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797935 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797974 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.797976 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.798013 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:01 crc kubenswrapper[5103]: I0130 00:14:01.987575 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.642374 5103 scope.go:117] "RemoveContainer" containerID="ed2bfc2b73398c1889d5a77eddd6e0ef71fb44a17294819baf27f50345e4955f" Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.775555 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.775656 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.839212 5103 scope.go:117] "RemoveContainer" containerID="e6a329d39509762784caccc32b4323411f00c0a9bfd035635c251413ddb2d332" Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.857362 5103 scope.go:117] "RemoveContainer" containerID="372a5eafdedfb6dff7c8337daef4475372ba0caf3aff027012516e0f21614085" Jan 30 00:14:02 crc kubenswrapper[5103]: E0130 00:14:02.859165 5103 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f59eaa22b02be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:14:02.857841342 +0000 UTC m=+232.729339394,LastTimestamp:2026-01-30 00:14:02.857841342 +0000 UTC m=+232.729339394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.872361 5103 scope.go:117] "RemoveContainer" containerID="abfd0b471685a353ec69c6889c8ef870bc8a246f713d8c033c6ef4c6cea8cbc2" Jan 30 00:14:02 crc kubenswrapper[5103]: I0130 00:14:02.888513 5103 scope.go:117] "RemoveContainer" containerID="487aacb9ba75fd28f520f3d4a32a82a1b33516035610efccfd2d8baacd805ff1" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.116451 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.128746 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpqb7"] Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.128802 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpqb7"] Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.129965 5103 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.131557 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.132500 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be" exitCode=2 Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.134536 5103 generic.go:358] "Generic (PLEG): container finished" podID="36d0743a-ddce-4bd2-8cca-44d42d9356da" containerID="7d8a9b02754e84af20022228fe1cad64203a3b40b3a8196d40d777d92317e4f3" exitCode=0 Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.220488 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.220560 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.220588 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.220613 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.220912 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.306197 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.306796 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322330 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322383 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322401 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322425 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322472 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322488 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322535 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.322546 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.323154 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.323169 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.423686 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2gvx\" (UniqueName: \"kubernetes.io/projected/c5938973-a6f9-4d60-b605-3f02b2c1c84f-kube-api-access-t2gvx\") pod \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.424106 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca\") pod \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\" (UID: \"c5938973-a6f9-4d60-b605-3f02b2c1c84f\") " Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.425001 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca" (OuterVolumeSpecName: "serviceca") pod "c5938973-a6f9-4d60-b605-3f02b2c1c84f" (UID: "c5938973-a6f9-4d60-b605-3f02b2c1c84f"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.429834 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5938973-a6f9-4d60-b605-3f02b2c1c84f-kube-api-access-t2gvx" (OuterVolumeSpecName: "kube-api-access-t2gvx") pod "c5938973-a6f9-4d60-b605-3f02b2c1c84f" (UID: "c5938973-a6f9-4d60-b605-3f02b2c1c84f"). InnerVolumeSpecName "kube-api-access-t2gvx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.526100 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t2gvx\" (UniqueName: \"kubernetes.io/projected/c5938973-a6f9-4d60-b605-3f02b2c1c84f-kube-api-access-t2gvx\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.526182 5103 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c5938973-a6f9-4d60-b605-3f02b2c1c84f-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.867856 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"8e3ac715b91fddae359b350cd88496ad1a437748990a5e54da482342c811ef9d"} Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.867928 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"36d0743a-ddce-4bd2-8cca-44d42d9356da","Type":"ContainerDied","Data":"7d8a9b02754e84af20022228fe1cad64203a3b40b3a8196d40d777d92317e4f3"} Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.867999 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.868026 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bhpd7"] Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.868108 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bhpd7"] Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.870464 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.870517 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.870742 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"59c82ec1806032dbe373182f75befd328452c9166c192bca363da6c02d99e1c0"} pod="openshift-console/downloads-747b44746d-j77tr" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.870795 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" containerID="cri-o://59c82ec1806032dbe373182f75befd328452c9166c192bca363da6c02d99e1c0" gracePeriod=2 Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.871933 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.872366 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:03 crc kubenswrapper[5103]: I0130 00:14:03.872697 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.143987 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.145989 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.146682 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7db8fb50f766d64858fb9c23c921f7327de27610f6bcaf84791914b161dde1c5" exitCode=0 Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.146704 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6" exitCode=0 Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.146712 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e" exitCode=0 Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.146763 5103 scope.go:117] "RemoveContainer" containerID="c4970ba0698267ac627cb02083bf4fbc02b06c7867996378a60df6b75a642a6b" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.149771 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29495520-x6t57" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.150608 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29495520-x6t57" event={"ID":"c5938973-a6f9-4d60-b605-3f02b2c1c84f","Type":"ContainerDied","Data":"f110469e2ef62c0b54ea25d9e9c5273b55bbc9a77eb25e1ad48e65441633b3d0"} Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.150646 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f110469e2ef62c0b54ea25d9e9c5273b55bbc9a77eb25e1ad48e65441633b3d0" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.174021 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.174624 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.174804 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.430898 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.431888 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.432403 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.432582 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540492 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36d0743a-ddce-4bd2-8cca-44d42d9356da-kube-api-access\") pod \"36d0743a-ddce-4bd2-8cca-44d42d9356da\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540596 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-kubelet-dir\") pod \"36d0743a-ddce-4bd2-8cca-44d42d9356da\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540627 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-var-lock\") pod \"36d0743a-ddce-4bd2-8cca-44d42d9356da\" (UID: \"36d0743a-ddce-4bd2-8cca-44d42d9356da\") " Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540657 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "36d0743a-ddce-4bd2-8cca-44d42d9356da" (UID: "36d0743a-ddce-4bd2-8cca-44d42d9356da"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540744 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-var-lock" (OuterVolumeSpecName: "var-lock") pod "36d0743a-ddce-4bd2-8cca-44d42d9356da" (UID: "36d0743a-ddce-4bd2-8cca-44d42d9356da"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540863 5103 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.540874 5103 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36d0743a-ddce-4bd2-8cca-44d42d9356da-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.549190 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d0743a-ddce-4bd2-8cca-44d42d9356da-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "36d0743a-ddce-4bd2-8cca-44d42d9356da" (UID: "36d0743a-ddce-4bd2-8cca-44d42d9356da"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.641837 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36d0743a-ddce-4bd2-8cca-44d42d9356da-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.874543 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="096edab0-9031-4bcd-8451-a93417372ee1" path="/var/lib/kubelet/pods/096edab0-9031-4bcd-8451-a93417372ee1/volumes" Jan 30 00:14:04 crc kubenswrapper[5103]: I0130 00:14:04.875479 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d4d4fce-00ed-4163-8a52-864aa4d324e6" path="/var/lib/kubelet/pods/3d4d4fce-00ed-4163-8a52-864aa4d324e6/volumes" Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.155973 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"36d0743a-ddce-4bd2-8cca-44d42d9356da","Type":"ContainerDied","Data":"d26e5dfec08469fb58fbea2e743d80f53ef5ef562fb067297ab9ef35b80c7464"} Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.156017 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d26e5dfec08469fb58fbea2e743d80f53ef5ef562fb067297ab9ef35b80c7464" Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.156142 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.157962 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7"} Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.159707 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.159835 5103 generic.go:358] "Generic (PLEG): container finished" podID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerID="59c82ec1806032dbe373182f75befd328452c9166c192bca363da6c02d99e1c0" exitCode=0 Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.159893 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerDied","Data":"59c82ec1806032dbe373182f75befd328452c9166c192bca363da6c02d99e1c0"} Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.160258 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.160576 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:05 crc kubenswrapper[5103]: I0130 00:14:05.270980 5103 scope.go:117] "RemoveContainer" containerID="fb88c4f3139f356c291358cb8aa4fa4cf78be8f2c5f1ebfdbfda9547ff3a84f8" Jan 30 00:14:06 crc kubenswrapper[5103]: I0130 00:14:06.168324 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:14:06 crc kubenswrapper[5103]: I0130 00:14:06.171324 5103 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049" exitCode=0 Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.432152 5103 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.432651 5103 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.432960 5103 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.433237 5103 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.433489 5103 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:06 crc kubenswrapper[5103]: I0130 00:14:06.433516 5103 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.433805 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="200ms" Jan 30 00:14:06 crc kubenswrapper[5103]: E0130 00:14:06.634873 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="400ms" Jan 30 00:14:07 crc kubenswrapper[5103]: E0130 00:14:07.035975 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="800ms" Jan 30 00:14:07 crc kubenswrapper[5103]: I0130 00:14:07.177185 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:07 crc kubenswrapper[5103]: I0130 00:14:07.178747 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:07 crc kubenswrapper[5103]: I0130 00:14:07.179558 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:07 crc kubenswrapper[5103]: E0130 00:14:07.837196 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="1.6s" Jan 30 00:14:09 crc kubenswrapper[5103]: 
I0130 00:14:09.019277 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.028165 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.028909 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.029241 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.029501 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.029733 5103 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.102668 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103078 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103235 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103276 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103320 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103429 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103438 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103494 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103500 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103621 5103 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103637 5103 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103650 5103 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.103662 5103 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.108321 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.196030 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.197617 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.197634 5103 scope.go:117] "RemoveContainer" containerID="7db8fb50f766d64858fb9c23c921f7327de27610f6bcaf84791914b161dde1c5" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.205775 5103 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.219514 5103 scope.go:117] "RemoveContainer" containerID="ba9d5e489e32ade00a7ed9f23881e724f12ad2a4f46c7eb2d7ffa428b1ed46e6" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.226373 5103 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.226863 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.227273 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.227702 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.245661 5103 scope.go:117] "RemoveContainer" containerID="8b99bd4ed442a6e893b8ffefa0d3e0262ea21aacb535efa1ccaae79c2b0df15e" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.270806 5103 scope.go:117] "RemoveContainer" containerID="bd61d622b20055e78631cc006b33331822b41cce911774790ecad8fd65b7c1be" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.294014 5103 scope.go:117] "RemoveContainer" containerID="b153c88a509956373202d01b7898f69889fe914c89d446857bbb41bcb03e5049" Jan 30 00:14:09 crc kubenswrapper[5103]: I0130 00:14:09.311586 5103 scope.go:117] "RemoveContainer" containerID="f528f7e2ed52e0e4fef42df268e6aeef6640302bfd1e89674fd3b5580c8b0be2" Jan 30 00:14:09 crc kubenswrapper[5103]: E0130 00:14:09.438748 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="3.2s" Jan 30 00:14:10 crc kubenswrapper[5103]: I0130 00:14:10.875266 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:10 crc kubenswrapper[5103]: I0130 00:14:10.876138 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:10 crc kubenswrapper[5103]: I0130 00:14:10.876408 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:10 crc kubenswrapper[5103]: I0130 00:14:10.876876 5103 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:10 crc kubenswrapper[5103]: I0130 00:14:10.882800 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.219098 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-j77tr" event={"ID":"5f40ccbb-715c-4854-b28f-ab8055375c91","Type":"ContainerStarted","Data":"64472619026cbbe379251178003f955b4bb2a1307cb8e228ed55293d739ed29b"} Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.219466 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.220083 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.220188 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.220552 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.221473 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.222291 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:11 crc kubenswrapper[5103]: I0130 00:14:11.222825 5103 status_manager.go:895] "Failed to get status for pod" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" pod="openshift-console/downloads-747b44746d-j77tr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-j77tr\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:11 crc kubenswrapper[5103]: E0130 00:14:11.230508 5103 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f59eaa22b02be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:14:02.857841342 +0000 UTC m=+232.729339394,LastTimestamp:2026-01-30 00:14:02.857841342 +0000 UTC m=+232.729339394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:14:12 crc kubenswrapper[5103]: I0130 00:14:12.226563 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:14:12 crc kubenswrapper[5103]: I0130 00:14:12.226975 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:14:12 crc kubenswrapper[5103]: E0130 00:14:12.639978 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.130:6443: connect: connection refused" interval="6.4s" Jan 30 00:14:12 crc kubenswrapper[5103]: I0130 00:14:12.775522 5103 patch_prober.go:28] interesting pod/downloads-747b44746d-j77tr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 00:14:12 crc kubenswrapper[5103]: I0130 00:14:12.775983 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-j77tr" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.252372 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.253279 5103 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9" exitCode=1 Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.253555 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9"} Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.254325 5103 scope.go:117] "RemoveContainer" containerID="b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.255515 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.256472 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.257009 5103 status_manager.go:895] "Failed to get status for pod" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" pod="openshift-console/downloads-747b44746d-j77tr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-j77tr\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.257308 5103 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:16 crc kubenswrapper[5103]: I0130 00:14:16.257584 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.263194 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.263733 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ae3e0e08ed67c62331b46cf2074f1b215dcca7fcb0af2347d9529fb8ff3ab82e"} Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.265941 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.266654 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.266944 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.267242 5103 status_manager.go:895] "Failed to get status for pod" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" pod="openshift-console/downloads-747b44746d-j77tr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-j77tr\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.267515 5103 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.968639 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.969531 5103 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 00:14:17 crc kubenswrapper[5103]: I0130 00:14:17.969627 5103 prober.go:120] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.867735 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.869590 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.870232 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.870716 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.871090 5103 status_manager.go:895] "Failed to get status for pod" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" pod="openshift-console/downloads-747b44746d-j77tr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-j77tr\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.871343 5103 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.894360 5103 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.894540 5103 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:18 crc kubenswrapper[5103]: E0130 00:14:18.895815 5103 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:18 crc kubenswrapper[5103]: I0130 00:14:18.896166 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:18 crc kubenswrapper[5103]: W0130 00:14:18.933703 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-d1a5e3f028a3221d25f342a9bd39773b7ea726fd084bb32f4f610d07eadcb456 WatchSource:0}: Error finding container d1a5e3f028a3221d25f342a9bd39773b7ea726fd084bb32f4f610d07eadcb456: Status 404 returned error can't find the container with id d1a5e3f028a3221d25f342a9bd39773b7ea726fd084bb32f4f610d07eadcb456 Jan 30 00:14:19 crc kubenswrapper[5103]: E0130 00:14:19.041403 5103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="7s" Jan 30 00:14:19 crc kubenswrapper[5103]: I0130 00:14:19.277009 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d1a5e3f028a3221d25f342a9bd39773b7ea726fd084bb32f4f610d07eadcb456"} Jan 30 00:14:20 crc kubenswrapper[5103]: I0130 00:14:20.874267 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:20 crc kubenswrapper[5103]: I0130 00:14:20.874461 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:20 crc kubenswrapper[5103]: I0130 00:14:20.874601 5103 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:20 crc kubenswrapper[5103]: I0130 00:14:20.874752 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:20 crc kubenswrapper[5103]: I0130 00:14:20.874900 5103 status_manager.go:895] "Failed to get status for pod" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" pod="openshift-console/downloads-747b44746d-j77tr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-j77tr\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:20 crc kubenswrapper[5103]: I0130 00:14:20.876274 5103 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:21 crc kubenswrapper[5103]: E0130 00:14:21.232102 5103 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f59eaa22b02be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 00:14:02.857841342 +0000 UTC m=+232.729339394,LastTimestamp:2026-01-30 00:14:02.857841342 +0000 UTC m=+232.729339394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.315932 5103 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="7dadfa91d753626cf3d7e8b197d0f960f5f2ec28a1a89374b78494a4c475e0ae" exitCode=0 Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.316263 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"7dadfa91d753626cf3d7e8b197d0f960f5f2ec28a1a89374b78494a4c475e0ae"} Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.316978 5103 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.317212 5103 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.317426 5103 status_manager.go:895] "Failed to get status for pod" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.317641 5103 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.317851 5103 status_manager.go:895] "Failed to get status for pod" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" pod="openshift-image-registry/image-pruner-29495520-x6t57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29495520-x6t57\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 
30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.318067 5103 status_manager.go:895] "Failed to get status for pod" podUID="5f40ccbb-715c-4854-b28f-ab8055375c91" pod="openshift-console/downloads-747b44746d-j77tr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-j77tr\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.318277 5103 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.318444 5103 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Jan 30 00:14:21 crc kubenswrapper[5103]: E0130 00:14:21.318704 5103 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:21 crc kubenswrapper[5103]: I0130 00:14:21.465239 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:14:22 crc kubenswrapper[5103]: I0130 00:14:22.234370 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-j77tr" Jan 30 00:14:22 crc kubenswrapper[5103]: I0130 00:14:22.331220 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1fb3571660e971b359d4c340d7be1878c55ae50327a9e7819b8f25b365fbe66b"} Jan 30 00:14:22 crc kubenswrapper[5103]: I0130 00:14:22.331555 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"954b9125e1be33a5e6eb4f89b7a006597732e602e95c91c596117e1751526b2f"} Jan 30 00:14:22 crc kubenswrapper[5103]: I0130 00:14:22.331567 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"7796ff097def23d28226f770ba3c77a19d674857edfe45de2559f8735742b4fc"} Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.339273 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d72f5041a0dc8f1ae006d43bcc632dd103a9b31a2c2af22496cbdc44ca692d27"} Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.339646 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"bad432e1280ca0bc9081f7baeb83400a8df530fd4427cc1249e99d70a3beed7c"} Jan 
30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.339949 5103 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.339964 5103 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.340231 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.896737 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.896791 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.902413 5103 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]log ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]etcd ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/priority-and-fairness-filter ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-apiextensions-informers ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-apiextensions-controllers ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/crd-informer-synced ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-system-namespaces-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 30 00:14:23 crc kubenswrapper[5103]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 30 00:14:23 crc kubenswrapper[5103]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 30 00:14:23 crc kubenswrapper[5103]: 
[+]poststarthook/priority-and-fairness-config-producer ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/bootstrap-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/start-kube-aggregator-informers ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-registration-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-discovery-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]autoregister-completion ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-openapi-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 30 00:14:23 crc kubenswrapper[5103]: livez check failed Jan 30 00:14:23 crc kubenswrapper[5103]: I0130 00:14:23.902484 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57755cc5f99000cc11e193051474d4e2" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 00:14:27 crc kubenswrapper[5103]: I0130 00:14:27.969945 5103 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 00:14:27 crc kubenswrapper[5103]: I0130 00:14:27.970701 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.349373 5103 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.349407 5103 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.369069 5103 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.369101 5103 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.493401 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 
30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.493472 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.902734 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:28 crc kubenswrapper[5103]: I0130 00:14:28.906484 5103 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="aac8e230-ac35-4811-a9b5-f24f8f62bb06" Jan 30 00:14:29 crc kubenswrapper[5103]: I0130 00:14:29.373965 5103 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:29 crc kubenswrapper[5103]: I0130 00:14:29.373995 5103 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dac48a-8ec1-4c4a-a2d9-c3a1567645a2" Jan 30 00:14:30 crc kubenswrapper[5103]: I0130 00:14:30.898603 5103 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="aac8e230-ac35-4811-a9b5-f24f8f62bb06" Jan 30 00:14:37 crc kubenswrapper[5103]: I0130 00:14:37.969487 5103 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 30 00:14:37 crc kubenswrapper[5103]: I0130 00:14:37.969913 5103 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 30 00:14:37 crc kubenswrapper[5103]: I0130 00:14:37.969972 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:14:37 crc kubenswrapper[5103]: I0130 00:14:37.970956 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"ae3e0e08ed67c62331b46cf2074f1b215dcca7fcb0af2347d9529fb8ff3ab82e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 30 00:14:37 crc kubenswrapper[5103]: I0130 00:14:37.971094 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://ae3e0e08ed67c62331b46cf2074f1b215dcca7fcb0af2347d9529fb8ff3ab82e" gracePeriod=30 Jan 30 00:14:38 crc kubenswrapper[5103]: I0130 00:14:38.755032 5103 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 30 00:14:38 crc kubenswrapper[5103]: I0130 00:14:38.853099 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.127517 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.233738 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.716718 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.885212 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.889620 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.984648 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:39 crc kubenswrapper[5103]: I0130 00:14:39.996774 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 30 00:14:40 crc kubenswrapper[5103]: I0130 00:14:40.072603 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 30 00:14:40 crc kubenswrapper[5103]: I0130 00:14:40.091318 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.043269 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.091110 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.236857 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.328872 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.398374 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.532571 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.558823 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 
00:14:41.697345 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 30 00:14:41 crc kubenswrapper[5103]: I0130 00:14:41.851813 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.156965 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.249023 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.421019 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.549958 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.732790 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.872578 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.947811 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 30 00:14:42 crc kubenswrapper[5103]: I0130 00:14:42.994986 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.053427 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.062547 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.181544 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.232023 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.327880 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.434077 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.488485 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.530359 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" 
Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.622784 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.717239 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.770738 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.805396 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 30 00:14:43 crc kubenswrapper[5103]: I0130 00:14:43.826894 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.042276 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.142278 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.245134 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.617405 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.713301 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.775799 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.929340 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 30 00:14:44 crc kubenswrapper[5103]: I0130 00:14:44.980097 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.261830 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.423310 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.429252 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.497157 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.622183 5103 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.652924 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.793333 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 30 00:14:45 crc kubenswrapper[5103]: I0130 00:14:45.897373 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.020181 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.116725 5103 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.159648 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.216872 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.431748 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.461864 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.530636 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.594721 5103 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.596442 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.596414389 podStartE2EDuration="45.596414389s" podCreationTimestamp="2026-01-30 00:14:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:28.237810024 +0000 UTC m=+258.109308086" watchObservedRunningTime="2026-01-30 00:14:46.596414389 +0000 UTC m=+276.467912471" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.602496 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.602569 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.608146 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.609777 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.626960 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.626943364 podStartE2EDuration="18.626943364s" podCreationTimestamp="2026-01-30 00:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:14:46.622953267 +0000 UTC m=+276.494451329" watchObservedRunningTime="2026-01-30 00:14:46.626943364 +0000 UTC m=+276.498441416" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.659222 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.803660 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 30 00:14:46 crc kubenswrapper[5103]: I0130 00:14:46.974400 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.006383 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.125514 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.300555 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.300555 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.321665 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.329830 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.401239 5103 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.425823 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.537255 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.681493 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 30 00:14:47 crc kubenswrapper[5103]: I0130 00:14:47.865595 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.156499 5103 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.192040 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.334685 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.362453 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.377782 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.530955 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 30 00:14:48 crc kubenswrapper[5103]: I0130 00:14:48.587362 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.047607 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.050122 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.170083 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.173889 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.228674 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.267593 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.308551 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.335107 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.346469 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.394133 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.595750 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.615607 5103 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.675529 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.694316 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.736345 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.820792 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.899508 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 30 00:14:49 crc kubenswrapper[5103]: I0130 00:14:49.991134 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.137359 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.150917 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.244409 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.386400 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.419410 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.423140 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.452660 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.494899 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.749634 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.782277 5103 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.782550 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7" gracePeriod=5 Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.905857 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 30 00:14:50 crc kubenswrapper[5103]: I0130 00:14:50.928475 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.212585 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.244939 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.264567 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.383680 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.422441 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.462317 5103 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.463568 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.479295 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.678967 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 30 00:14:51 crc kubenswrapper[5103]: I0130 00:14:51.719390 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.137218 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.142486 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.225779 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.314288 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.352468 5103 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.543777 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.646868 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.714757 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.740813 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 30 00:14:52 crc kubenswrapper[5103]: I0130 00:14:52.761667 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 30 00:14:53 crc kubenswrapper[5103]: I0130 00:14:53.045299 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 30 00:14:53 crc kubenswrapper[5103]: I0130 00:14:53.112321 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:14:53 crc kubenswrapper[5103]: I0130 00:14:53.183785 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 30 00:14:53 crc kubenswrapper[5103]: I0130 00:14:53.284585 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 30 00:14:53 crc kubenswrapper[5103]: I0130 00:14:53.365301 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 30 00:14:53 crc kubenswrapper[5103]: I0130 00:14:53.691041 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 30 00:14:54 crc kubenswrapper[5103]: I0130 00:14:54.006568 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 30 00:14:54 crc kubenswrapper[5103]: I0130 00:14:54.058757 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 30 00:14:54 crc kubenswrapper[5103]: I0130 00:14:54.081443 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 30 00:14:54 crc kubenswrapper[5103]: I0130 00:14:54.423719 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 30 00:14:54 crc kubenswrapper[5103]: I0130 00:14:54.687140 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:14:54 crc kubenswrapper[5103]: I0130 00:14:54.780416 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.398265 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.398399 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.542884 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.542997 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543189 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543247 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543304 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543808 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543890 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543936 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.543986 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.560706 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.572504 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.572582 5103 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7" exitCode=137 Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.572792 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.572873 5103 scope.go:117] "RemoveContainer" containerID="b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.627557 5103 scope.go:117] "RemoveContainer" containerID="b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7" Jan 30 00:14:56 crc kubenswrapper[5103]: E0130 00:14:56.628409 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7\": container with ID starting with b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7 not found: ID does not exist" containerID="b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.628482 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7"} err="failed to get container status \"b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7\": rpc error: code = NotFound desc = could not find container \"b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7\": container with ID starting with b3e5aa3a711891983378f088ed5e005f0b72f7d2f94daa2f8d5756dfac3ba4f7 not found: ID does not exist" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.645236 5103 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.645265 5103 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:56 crc 
kubenswrapper[5103]: I0130 00:14:56.645278 5103 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.645289 5103 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.645300 5103 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.876481 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.876808 5103 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.892614 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.892713 5103 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bbd98a8a-8e00-459e-9b14-f5fbde204275" Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.900218 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 00:14:56 crc kubenswrapper[5103]: I0130 00:14:56.900265 5103 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bbd98a8a-8e00-459e-9b14-f5fbde204275" Jan 30 00:14:58 crc kubenswrapper[5103]: I0130 00:14:58.493737 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:14:58 crc kubenswrapper[5103]: I0130 00:14:58.494247 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:14:58 crc kubenswrapper[5103]: I0130 00:14:58.494339 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:14:58 crc kubenswrapper[5103]: I0130 00:14:58.495333 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"47d4649f628f9ff08c1eae857ce8b6a70f66ec474c9229aafcc4d26442b014bb"} pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:14:58 crc kubenswrapper[5103]: I0130 00:14:58.495474 5103 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" containerID="cri-o://47d4649f628f9ff08c1eae857ce8b6a70f66ec474c9229aafcc4d26442b014bb" gracePeriod=600 Jan 30 00:14:58 crc kubenswrapper[5103]: I0130 00:14:58.911704 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 30 00:14:59 crc kubenswrapper[5103]: I0130 00:14:59.598481 5103 generic.go:358] "Generic (PLEG): container finished" podID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerID="47d4649f628f9ff08c1eae857ce8b6a70f66ec474c9229aafcc4d26442b014bb" exitCode=0 Jan 30 00:14:59 crc kubenswrapper[5103]: I0130 00:14:59.598598 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerDied","Data":"47d4649f628f9ff08c1eae857ce8b6a70f66ec474c9229aafcc4d26442b014bb"} Jan 30 00:14:59 crc kubenswrapper[5103]: I0130 00:14:59.599202 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"346d68dc943f95b7c3635e3ca8c695bae2c81b93ca2769fe09d08ce315c33590"} Jan 30 00:15:03 crc kubenswrapper[5103]: I0130 00:15:03.698168 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 30 00:15:05 crc kubenswrapper[5103]: I0130 00:15:05.022676 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:15:05 crc kubenswrapper[5103]: I0130 00:15:05.669681 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 30 00:15:05 crc kubenswrapper[5103]: I0130 00:15:05.797528 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 30 00:15:05 crc kubenswrapper[5103]: I0130 00:15:05.901143 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 30 00:15:06 crc kubenswrapper[5103]: I0130 00:15:06.131535 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 30 00:15:06 crc kubenswrapper[5103]: I0130 00:15:06.874802 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 30 00:15:07 crc kubenswrapper[5103]: I0130 00:15:07.124667 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 30 00:15:07 crc kubenswrapper[5103]: I0130 00:15:07.206086 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 30 00:15:07 crc kubenswrapper[5103]: I0130 00:15:07.869700 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.658733 5103 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.660564 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.660645 5103 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="ae3e0e08ed67c62331b46cf2074f1b215dcca7fcb0af2347d9529fb8ff3ab82e" exitCode=137 Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.660730 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"ae3e0e08ed67c62331b46cf2074f1b215dcca7fcb0af2347d9529fb8ff3ab82e"} Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.660779 5103 scope.go:117] "RemoveContainer" containerID="b5d572d467af4ce53951d7663f3bcc31042f84896e2a111cea416490836872d9" Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.746649 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 30 00:15:08 crc kubenswrapper[5103]: I0130 00:15:08.978229 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 30 00:15:09 crc kubenswrapper[5103]: I0130 00:15:09.062697 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 30 00:15:09 crc kubenswrapper[5103]: I0130 00:15:09.253632 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 30 00:15:09 crc kubenswrapper[5103]: I0130 00:15:09.672663 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:15:09 crc kubenswrapper[5103]: I0130 00:15:09.675587 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6951b3d44456fd644dddb08caa9fe5616204189d4cb5d7fcafe82ceb45b4bc6a"} Jan 30 00:15:09 crc kubenswrapper[5103]: I0130 00:15:09.678379 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 30 00:15:10 crc kubenswrapper[5103]: I0130 00:15:10.083553 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 30 00:15:10 crc kubenswrapper[5103]: I0130 00:15:10.901028 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.081193 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 
00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.085970 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.187941 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.253740 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.464927 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.549601 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.644385 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 30 00:15:11 crc kubenswrapper[5103]: I0130 00:15:11.974831 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 30 00:15:12 crc kubenswrapper[5103]: I0130 00:15:12.161853 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 30 00:15:12 crc kubenswrapper[5103]: I0130 00:15:12.594846 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 30 00:15:12 crc kubenswrapper[5103]: I0130 00:15:12.643163 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 30 00:15:12 crc kubenswrapper[5103]: I0130 00:15:12.736213 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:12 crc kubenswrapper[5103]: I0130 00:15:12.744526 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 30 00:15:13 crc kubenswrapper[5103]: I0130 00:15:13.164994 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 30 00:15:13 crc kubenswrapper[5103]: I0130 00:15:13.268310 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 30 00:15:13 crc kubenswrapper[5103]: I0130 00:15:13.632921 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 30 00:15:14 crc kubenswrapper[5103]: I0130 00:15:14.323033 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 30 00:15:14 crc kubenswrapper[5103]: I0130 00:15:14.684804 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 30 00:15:14 crc kubenswrapper[5103]: I0130 00:15:14.716247 5103 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 30 00:15:15 crc kubenswrapper[5103]: I0130 00:15:15.681615 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 30 00:15:15 crc kubenswrapper[5103]: I0130 00:15:15.791532 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 30 00:15:16 crc kubenswrapper[5103]: I0130 00:15:16.430309 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:15:16 crc kubenswrapper[5103]: I0130 00:15:16.594397 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.595969 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.704293 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.741957 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.744211 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.793737 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.969186 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:15:17 crc kubenswrapper[5103]: I0130 00:15:17.974436 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:15:18 crc kubenswrapper[5103]: I0130 00:15:18.016955 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:18 crc kubenswrapper[5103]: I0130 00:15:18.200237 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 30 00:15:18 crc kubenswrapper[5103]: I0130 00:15:18.426291 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 30 00:15:18 crc kubenswrapper[5103]: I0130 00:15:18.745829 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 00:15:19 crc kubenswrapper[5103]: I0130 00:15:19.075711 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:19 crc kubenswrapper[5103]: I0130 00:15:19.316870 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:15:19 crc kubenswrapper[5103]: I0130 00:15:19.653423 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 30 00:15:19 crc kubenswrapper[5103]: I0130 00:15:19.993285 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 30 00:15:20 crc kubenswrapper[5103]: I0130 00:15:20.157741 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 30 00:15:20 crc kubenswrapper[5103]: I0130 00:15:20.226537 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 30 00:15:20 crc kubenswrapper[5103]: I0130 00:15:20.423791 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 30 00:15:20 crc kubenswrapper[5103]: I0130 00:15:20.912391 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.162268 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.282794 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.317410 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.749565 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.764562 5103 generic.go:358] "Generic (PLEG): container finished" podID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerID="9d8321d26701e84b2172ecd6b861bb6b29cc5de963380d89851b5cc503a53bec" exitCode=0 Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.764603 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerDied","Data":"9d8321d26701e84b2172ecd6b861bb6b29cc5de963380d89851b5cc503a53bec"} Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.765422 5103 scope.go:117] "RemoveContainer" containerID="9d8321d26701e84b2172ecd6b861bb6b29cc5de963380d89851b5cc503a53bec" Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.769190 5103 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:15:21 crc kubenswrapper[5103]: I0130 00:15:21.853185 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.070313 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 
00:15:22.196563 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.325220 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.461306 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.603615 5103 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.772794 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-mf247_b15f695a-0fc1-4ab5-aad2-341f3bf6822d/marketplace-operator/1.log" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.773853 5103 generic.go:358] "Generic (PLEG): container finished" podID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" exitCode=1 Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.773930 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerDied","Data":"3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0"} Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.774080 5103 scope.go:117] "RemoveContainer" containerID="9d8321d26701e84b2172ecd6b861bb6b29cc5de963380d89851b5cc503a53bec" Jan 30 00:15:22 crc kubenswrapper[5103]: I0130 00:15:22.774987 5103 scope.go:117] "RemoveContainer" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:15:22 crc kubenswrapper[5103]: E0130 00:15:22.775979 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-mf247_openshift-marketplace(b15f695a-0fc1-4ab5-aad2-341f3bf6822d)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.044185 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.172610 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.431616 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.595512 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.672672 5103 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.784414 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-mf247_b15f695a-0fc1-4ab5-aad2-341f3bf6822d/marketplace-operator/1.log" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.912525 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 30 00:15:23 crc kubenswrapper[5103]: I0130 00:15:23.941978 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:15:24 crc kubenswrapper[5103]: I0130 00:15:24.002693 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 30 00:15:24 crc kubenswrapper[5103]: I0130 00:15:24.183043 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 30 00:15:24 crc kubenswrapper[5103]: I0130 00:15:24.365240 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 30 00:15:24 crc kubenswrapper[5103]: I0130 00:15:24.645538 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 30 00:15:24 crc kubenswrapper[5103]: I0130 00:15:24.729604 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 30 00:15:25 crc kubenswrapper[5103]: I0130 00:15:25.196034 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 30 00:15:25 crc kubenswrapper[5103]: I0130 00:15:25.644637 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 30 00:15:25 crc kubenswrapper[5103]: I0130 00:15:25.954293 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.010211 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.393421 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.589623 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.617708 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.644229 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.646673 5103 
scope.go:117] "RemoveContainer" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:15:26 crc kubenswrapper[5103]: E0130 00:15:26.647172 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-mf247_openshift-marketplace(b15f695a-0fc1-4ab5-aad2-341f3bf6822d)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" Jan 30 00:15:26 crc kubenswrapper[5103]: I0130 00:15:26.733396 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.187418 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.391262 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz"] Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392373 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" containerName="installer" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392399 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" containerName="installer" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392413 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" containerName="image-pruner" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392421 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" containerName="image-pruner" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392467 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392476 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392593 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="36d0743a-ddce-4bd2-8cca-44d42d9356da" containerName="installer" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392607 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="c5938973-a6f9-4d60-b605-3f02b2c1c84f" containerName="image-pruner" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.392618 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.402677 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz"] Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.402839 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.404751 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.404828 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.500387 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/325216c1-422b-4a9d-ab9b-fcc433fe43b8-secret-volume\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.500599 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/325216c1-422b-4a9d-ab9b-fcc433fe43b8-config-volume\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.500675 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpnlp\" (UniqueName: \"kubernetes.io/projected/325216c1-422b-4a9d-ab9b-fcc433fe43b8-kube-api-access-wpnlp\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.601940 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/325216c1-422b-4a9d-ab9b-fcc433fe43b8-config-volume\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.602017 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wpnlp\" (UniqueName: \"kubernetes.io/projected/325216c1-422b-4a9d-ab9b-fcc433fe43b8-kube-api-access-wpnlp\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.602138 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/325216c1-422b-4a9d-ab9b-fcc433fe43b8-secret-volume\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.602828 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/325216c1-422b-4a9d-ab9b-fcc433fe43b8-config-volume\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 
30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.609246 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/325216c1-422b-4a9d-ab9b-fcc433fe43b8-secret-volume\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.618933 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpnlp\" (UniqueName: \"kubernetes.io/projected/325216c1-422b-4a9d-ab9b-fcc433fe43b8-kube-api-access-wpnlp\") pod \"collect-profiles-29495535-2v5qz\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:27 crc kubenswrapper[5103]: I0130 00:15:27.723341 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.090720 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.195586 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz"] Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.200734 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.345883 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.347017 5103 scope.go:117] "RemoveContainer" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:15:28 crc kubenswrapper[5103]: E0130 00:15:28.347552 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-547dbd544d-mf247_openshift-marketplace(b15f695a-0fc1-4ab5-aad2-341f3bf6822d)\"" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.743428 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.823027 5103 generic.go:358] "Generic (PLEG): container finished" podID="325216c1-422b-4a9d-ab9b-fcc433fe43b8" containerID="6c5ceedcb3e34d36eeae0f5ae68862363cc7dc5fe8f4f10ce0e542de91be2cc6" exitCode=0 Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.823160 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" event={"ID":"325216c1-422b-4a9d-ab9b-fcc433fe43b8","Type":"ContainerDied","Data":"6c5ceedcb3e34d36eeae0f5ae68862363cc7dc5fe8f4f10ce0e542de91be2cc6"} Jan 30 00:15:28 crc kubenswrapper[5103]: I0130 00:15:28.823451 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" 
event={"ID":"325216c1-422b-4a9d-ab9b-fcc433fe43b8","Type":"ContainerStarted","Data":"d101796d8c8dc1502643088fc37363d125d3aea5b84c5917aafbf53dfee80956"} Jan 30 00:15:29 crc kubenswrapper[5103]: I0130 00:15:29.133302 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 30 00:15:29 crc kubenswrapper[5103]: I0130 00:15:29.388155 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 30 00:15:29 crc kubenswrapper[5103]: I0130 00:15:29.667611 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 30 00:15:29 crc kubenswrapper[5103]: I0130 00:15:29.993595 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.039864 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.053630 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.089088 5103 ???:1] "http: TLS handshake error from 192.168.126.11:38444: no serving certificate available for the kubelet" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.132387 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/325216c1-422b-4a9d-ab9b-fcc433fe43b8-secret-volume\") pod \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.132437 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpnlp\" (UniqueName: \"kubernetes.io/projected/325216c1-422b-4a9d-ab9b-fcc433fe43b8-kube-api-access-wpnlp\") pod \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.132478 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/325216c1-422b-4a9d-ab9b-fcc433fe43b8-config-volume\") pod \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\" (UID: \"325216c1-422b-4a9d-ab9b-fcc433fe43b8\") " Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.133082 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/325216c1-422b-4a9d-ab9b-fcc433fe43b8-config-volume" (OuterVolumeSpecName: "config-volume") pod "325216c1-422b-4a9d-ab9b-fcc433fe43b8" (UID: "325216c1-422b-4a9d-ab9b-fcc433fe43b8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.139934 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/325216c1-422b-4a9d-ab9b-fcc433fe43b8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "325216c1-422b-4a9d-ab9b-fcc433fe43b8" (UID: "325216c1-422b-4a9d-ab9b-fcc433fe43b8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.140459 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/325216c1-422b-4a9d-ab9b-fcc433fe43b8-kube-api-access-wpnlp" (OuterVolumeSpecName: "kube-api-access-wpnlp") pod "325216c1-422b-4a9d-ab9b-fcc433fe43b8" (UID: "325216c1-422b-4a9d-ab9b-fcc433fe43b8"). InnerVolumeSpecName "kube-api-access-wpnlp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.233931 5103 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/325216c1-422b-4a9d-ab9b-fcc433fe43b8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.233969 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wpnlp\" (UniqueName: \"kubernetes.io/projected/325216c1-422b-4a9d-ab9b-fcc433fe43b8-kube-api-access-wpnlp\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.233977 5103 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/325216c1-422b-4a9d-ab9b-fcc433fe43b8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.706070 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.740151 5103 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.835934 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" event={"ID":"325216c1-422b-4a9d-ab9b-fcc433fe43b8","Type":"ContainerDied","Data":"d101796d8c8dc1502643088fc37363d125d3aea5b84c5917aafbf53dfee80956"} Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.836333 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d101796d8c8dc1502643088fc37363d125d3aea5b84c5917aafbf53dfee80956" Jan 30 00:15:30 crc kubenswrapper[5103]: I0130 00:15:30.835980 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495535-2v5qz" Jan 30 00:15:31 crc kubenswrapper[5103]: I0130 00:15:31.002125 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 30 00:15:31 crc kubenswrapper[5103]: I0130 00:15:31.443908 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:15:31 crc kubenswrapper[5103]: I0130 00:15:31.607533 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 30 00:15:32 crc kubenswrapper[5103]: I0130 00:15:32.201585 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 30 00:15:32 crc kubenswrapper[5103]: I0130 00:15:32.275379 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 30 00:15:32 crc kubenswrapper[5103]: I0130 00:15:32.400849 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 30 00:15:32 crc kubenswrapper[5103]: I0130 00:15:32.401156 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 30 00:15:33 crc kubenswrapper[5103]: I0130 00:15:33.190174 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 30 00:15:33 crc kubenswrapper[5103]: I0130 00:15:33.473493 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 30 00:15:34 crc kubenswrapper[5103]: I0130 00:15:34.610433 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 30 00:15:34 crc kubenswrapper[5103]: I0130 00:15:34.823517 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 30 00:15:34 crc kubenswrapper[5103]: I0130 00:15:34.941429 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 30 00:15:42 crc kubenswrapper[5103]: I0130 00:15:42.868147 5103 scope.go:117] "RemoveContainer" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:15:43 crc kubenswrapper[5103]: I0130 00:15:43.939387 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-mf247_b15f695a-0fc1-4ab5-aad2-341f3bf6822d/marketplace-operator/1.log" Jan 30 00:15:43 crc kubenswrapper[5103]: I0130 00:15:43.939816 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerStarted","Data":"bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1"} Jan 30 00:15:43 crc kubenswrapper[5103]: I0130 00:15:43.940301 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:15:43 crc kubenswrapper[5103]: I0130 00:15:43.944147 5103 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:16:21 crc kubenswrapper[5103]: I0130 00:16:21.641670 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-spmxr"] Jan 30 00:16:21 crc kubenswrapper[5103]: I0130 00:16:21.642554 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" containerID="cri-o://8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88" gracePeriod=30 Jan 30 00:16:21 crc kubenswrapper[5103]: I0130 00:16:21.659425 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm"] Jan 30 00:16:21 crc kubenswrapper[5103]: I0130 00:16:21.660107 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" containerID="cri-o://712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9" gracePeriod=30 Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.064490 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.073976 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.093236 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57969d489d-xkzdh"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.093939 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="325216c1-422b-4a9d-ab9b-fcc433fe43b8" containerName="collect-profiles" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.093966 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="325216c1-422b-4a9d-ab9b-fcc433fe43b8" containerName="collect-profiles" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.093990 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.093999 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.094014 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.094024 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.094152 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.094169 5103 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerName="route-controller-manager" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.094178 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="325216c1-422b-4a9d-ab9b-fcc433fe43b8" containerName="collect-profiles" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.100252 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.102092 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57969d489d-xkzdh"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.113931 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.119535 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.128897 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.194426 5103 generic.go:358] "Generic (PLEG): container finished" podID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerID="8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88" exitCode=0 Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.194509 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.194580 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" event={"ID":"d3abf3af-b96a-44fa-bd40-1c92bab19b92","Type":"ContainerDied","Data":"8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88"} Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.194629 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" event={"ID":"d3abf3af-b96a-44fa-bd40-1c92bab19b92","Type":"ContainerDied","Data":"f01ae49c3dbf6ce1c41262f39b1cfb6c8326085cddd7aa8f645756c56fc66e24"} Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.194658 5103 scope.go:117] "RemoveContainer" containerID="8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.196039 5103 generic.go:358] "Generic (PLEG): container finished" podID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" containerID="712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9" exitCode=0 Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.196181 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" event={"ID":"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204","Type":"ContainerDied","Data":"712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9"} Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.196205 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" 
event={"ID":"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204","Type":"ContainerDied","Data":"9131b9500cdfd415e7ec77b417734cc2ba2d9446de26cd67b54fba245814badb"} Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.196260 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205231 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-serving-cert\") pod \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205278 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3abf3af-b96a-44fa-bd40-1c92bab19b92-tmp\") pod \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205326 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-config\") pod \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205367 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fxzw\" (UniqueName: \"kubernetes.io/projected/d3abf3af-b96a-44fa-bd40-1c92bab19b92-kube-api-access-4fxzw\") pod \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205386 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-client-ca\") pod \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205403 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3abf3af-b96a-44fa-bd40-1c92bab19b92-serving-cert\") pod \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205417 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgvv5\" (UniqueName: \"kubernetes.io/projected/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-kube-api-access-qgvv5\") pod \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205438 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-client-ca\") pod \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205468 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-config\") pod \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 
00:16:22.205502 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-tmp\") pod \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\" (UID: \"fee9c38e-5ed2-4ec7-9f3f-01f08bf09204\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205560 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-proxy-ca-bundles\") pod \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\" (UID: \"d3abf3af-b96a-44fa-bd40-1c92bab19b92\") " Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205652 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-config\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205689 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-config\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205713 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ad9bcf-7352-426a-8c3a-94904bd8616c-serving-cert\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205741 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-client-ca\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205758 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76r7h\" (UniqueName: \"kubernetes.io/projected/158c1d70-030a-44de-b9af-51dafc4857f5-kube-api-access-76r7h\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205797 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-proxy-ca-bundles\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205815 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/158c1d70-030a-44de-b9af-51dafc4857f5-tmp\") pod 
\"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205837 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9bcf-7352-426a-8c3a-94904bd8616c-tmp\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205859 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-client-ca\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205875 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dnmg\" (UniqueName: \"kubernetes.io/projected/b9ad9bcf-7352-426a-8c3a-94904bd8616c-kube-api-access-7dnmg\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.205895 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/158c1d70-030a-44de-b9af-51dafc4857f5-serving-cert\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.207612 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d3abf3af-b96a-44fa-bd40-1c92bab19b92" (UID: "d3abf3af-b96a-44fa-bd40-1c92bab19b92"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.207795 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-tmp" (OuterVolumeSpecName: "tmp") pod "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" (UID: "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.207971 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3abf3af-b96a-44fa-bd40-1c92bab19b92-tmp" (OuterVolumeSpecName: "tmp") pod "d3abf3af-b96a-44fa-bd40-1c92bab19b92" (UID: "d3abf3af-b96a-44fa-bd40-1c92bab19b92"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.208421 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-config" (OuterVolumeSpecName: "config") pod "d3abf3af-b96a-44fa-bd40-1c92bab19b92" (UID: "d3abf3af-b96a-44fa-bd40-1c92bab19b92"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.208468 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-config" (OuterVolumeSpecName: "config") pod "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" (UID: "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.208528 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-client-ca" (OuterVolumeSpecName: "client-ca") pod "d3abf3af-b96a-44fa-bd40-1c92bab19b92" (UID: "d3abf3af-b96a-44fa-bd40-1c92bab19b92"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.208573 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-client-ca" (OuterVolumeSpecName: "client-ca") pod "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" (UID: "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.211843 5103 scope.go:117] "RemoveContainer" containerID="8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.213339 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3abf3af-b96a-44fa-bd40-1c92bab19b92-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d3abf3af-b96a-44fa-bd40-1c92bab19b92" (UID: "d3abf3af-b96a-44fa-bd40-1c92bab19b92"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.213427 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-kube-api-access-qgvv5" (OuterVolumeSpecName: "kube-api-access-qgvv5") pod "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" (UID: "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204"). InnerVolumeSpecName "kube-api-access-qgvv5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.214858 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3abf3af-b96a-44fa-bd40-1c92bab19b92-kube-api-access-4fxzw" (OuterVolumeSpecName: "kube-api-access-4fxzw") pod "d3abf3af-b96a-44fa-bd40-1c92bab19b92" (UID: "d3abf3af-b96a-44fa-bd40-1c92bab19b92"). InnerVolumeSpecName "kube-api-access-4fxzw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: E0130 00:16:22.218937 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88\": container with ID starting with 8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88 not found: ID does not exist" containerID="8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.219001 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88"} err="failed to get container status \"8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88\": rpc error: code = NotFound desc = could not find container \"8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88\": container with ID starting with 8f890bb3cf816a18897c43d1b183ec431c5d1433b9ec6f8203929686af9b4e88 not found: ID does not exist" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.219037 5103 scope.go:117] "RemoveContainer" containerID="712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.220967 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" (UID: "fee9c38e-5ed2-4ec7-9f3f-01f08bf09204"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.247821 5103 scope.go:117] "RemoveContainer" containerID="712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9" Jan 30 00:16:22 crc kubenswrapper[5103]: E0130 00:16:22.248279 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9\": container with ID starting with 712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9 not found: ID does not exist" containerID="712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.248341 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9"} err="failed to get container status \"712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9\": rpc error: code = NotFound desc = could not find container \"712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9\": container with ID starting with 712cb8d07a78e60ab00089d3821d7e8655cc4ddd2a076e41a81db33f438392d9 not found: ID does not exist" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307120 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ad9bcf-7352-426a-8c3a-94904bd8616c-serving-cert\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307221 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-client-ca\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307253 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-76r7h\" (UniqueName: \"kubernetes.io/projected/158c1d70-030a-44de-b9af-51dafc4857f5-kube-api-access-76r7h\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307304 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-proxy-ca-bundles\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307331 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/158c1d70-030a-44de-b9af-51dafc4857f5-tmp\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307363 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9bcf-7352-426a-8c3a-94904bd8616c-tmp\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307395 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-client-ca\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307630 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7dnmg\" (UniqueName: \"kubernetes.io/projected/b9ad9bcf-7352-426a-8c3a-94904bd8616c-kube-api-access-7dnmg\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307674 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/158c1d70-030a-44de-b9af-51dafc4857f5-serving-cert\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307715 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-config\") pod \"controller-manager-57969d489d-xkzdh\" (UID: 
\"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307762 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-config\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307811 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307827 5103 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307838 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307849 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3abf3af-b96a-44fa-bd40-1c92bab19b92-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307858 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307869 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4fxzw\" (UniqueName: \"kubernetes.io/projected/d3abf3af-b96a-44fa-bd40-1c92bab19b92-kube-api-access-4fxzw\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307880 5103 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307891 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3abf3af-b96a-44fa-bd40-1c92bab19b92-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307903 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgvv5\" (UniqueName: \"kubernetes.io/projected/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-kube-api-access-qgvv5\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307914 5103 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.307924 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3abf3af-b96a-44fa-bd40-1c92bab19b92-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.308451 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/b9ad9bcf-7352-426a-8c3a-94904bd8616c-tmp\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.308719 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-client-ca\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.308804 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/158c1d70-030a-44de-b9af-51dafc4857f5-tmp\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.309216 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-proxy-ca-bundles\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.309369 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-config\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.309550 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/158c1d70-030a-44de-b9af-51dafc4857f5-client-ca\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.310121 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-config\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.314013 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/158c1d70-030a-44de-b9af-51dafc4857f5-serving-cert\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.314380 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ad9bcf-7352-426a-8c3a-94904bd8616c-serving-cert\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 
00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.327870 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dnmg\" (UniqueName: \"kubernetes.io/projected/b9ad9bcf-7352-426a-8c3a-94904bd8616c-kube-api-access-7dnmg\") pod \"route-controller-manager-75764f8cc-kl6fb\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.328790 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-76r7h\" (UniqueName: \"kubernetes.io/projected/158c1d70-030a-44de-b9af-51dafc4857f5-kube-api-access-76r7h\") pod \"controller-manager-57969d489d-xkzdh\" (UID: \"158c1d70-030a-44de-b9af-51dafc4857f5\") " pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.422828 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.439103 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.531203 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-spmxr"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.536523 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-spmxr"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.552795 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.561270 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-7csdm"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.704278 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57969d489d-xkzdh"] Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.746176 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb"] Jan 30 00:16:22 crc kubenswrapper[5103]: W0130 00:16:22.749075 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9ad9bcf_7352_426a_8c3a_94904bd8616c.slice/crio-8cf5d8f319e48117f88db881cdbccdb15a5652356eefcbc633994caf67c03759 WatchSource:0}: Error finding container 8cf5d8f319e48117f88db881cdbccdb15a5652356eefcbc633994caf67c03759: Status 404 returned error can't find the container with id 8cf5d8f319e48117f88db881cdbccdb15a5652356eefcbc633994caf67c03759 Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.875131 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" path="/var/lib/kubelet/pods/d3abf3af-b96a-44fa-bd40-1c92bab19b92/volumes" Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.875679 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee9c38e-5ed2-4ec7-9f3f-01f08bf09204" path="/var/lib/kubelet/pods/fee9c38e-5ed2-4ec7-9f3f-01f08bf09204/volumes" Jan 30 00:16:22 crc kubenswrapper[5103]: 
I0130 00:16:22.922111 5103 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-spmxr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 00:16:22 crc kubenswrapper[5103]: I0130 00:16:22.922173 5103 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-spmxr" podUID="d3abf3af-b96a-44fa-bd40-1c92bab19b92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.203792 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" event={"ID":"158c1d70-030a-44de-b9af-51dafc4857f5","Type":"ContainerStarted","Data":"68c4fdb1edcdd731feb109f9311b3ace5c45b3a322cf99d6dc3e0c1c7fb092ed"} Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.204606 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" event={"ID":"158c1d70-030a-44de-b9af-51dafc4857f5","Type":"ContainerStarted","Data":"6768c319ec0bab4c451f22f569c0670d70e61c645a1dcb38b3fc1ee646eb326a"} Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.204652 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.209657 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" event={"ID":"b9ad9bcf-7352-426a-8c3a-94904bd8616c","Type":"ContainerStarted","Data":"23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903"} Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.209708 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" event={"ID":"b9ad9bcf-7352-426a-8c3a-94904bd8616c","Type":"ContainerStarted","Data":"8cf5d8f319e48117f88db881cdbccdb15a5652356eefcbc633994caf67c03759"} Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.210813 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.217692 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.248705 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" podStartSLOduration=2.248687877 podStartE2EDuration="2.248687877s" podCreationTimestamp="2026-01-30 00:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:16:23.246613906 +0000 UTC m=+373.118111978" watchObservedRunningTime="2026-01-30 00:16:23.248687877 +0000 UTC m=+373.120185949" Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.252801 5103 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" podStartSLOduration=2.252786588 podStartE2EDuration="2.252786588s" podCreationTimestamp="2026-01-30 00:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:16:23.228917362 +0000 UTC m=+373.100415434" watchObservedRunningTime="2026-01-30 00:16:23.252786588 +0000 UTC m=+373.124284650" Jan 30 00:16:23 crc kubenswrapper[5103]: I0130 00:16:23.797294 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57969d489d-xkzdh" Jan 30 00:16:58 crc kubenswrapper[5103]: I0130 00:16:58.493736 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:16:58 crc kubenswrapper[5103]: I0130 00:16:58.495417 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:17:01 crc kubenswrapper[5103]: I0130 00:17:01.646375 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb"] Jan 30 00:17:01 crc kubenswrapper[5103]: I0130 00:17:01.646838 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" podUID="b9ad9bcf-7352-426a-8c3a-94904bd8616c" containerName="route-controller-manager" containerID="cri-o://23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903" gracePeriod=30 Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.003992 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.036807 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w"] Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.037526 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b9ad9bcf-7352-426a-8c3a-94904bd8616c" containerName="route-controller-manager" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.037552 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ad9bcf-7352-426a-8c3a-94904bd8616c" containerName="route-controller-manager" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.037836 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b9ad9bcf-7352-426a-8c3a-94904bd8616c" containerName="route-controller-manager" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.044942 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.055532 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w"] Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.157918 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-config\") pod \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.157993 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-client-ca\") pod \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158083 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dnmg\" (UniqueName: \"kubernetes.io/projected/b9ad9bcf-7352-426a-8c3a-94904bd8616c-kube-api-access-7dnmg\") pod \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158161 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ad9bcf-7352-426a-8c3a-94904bd8616c-serving-cert\") pod \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158188 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9bcf-7352-426a-8c3a-94904bd8616c-tmp\") pod \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\" (UID: \"b9ad9bcf-7352-426a-8c3a-94904bd8616c\") " Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158392 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31517abb-bb81-4882-9d24-462e89cad611-serving-cert\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158426 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31517abb-bb81-4882-9d24-462e89cad611-config\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158474 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdgt9\" (UniqueName: \"kubernetes.io/projected/31517abb-bb81-4882-9d24-462e89cad611-kube-api-access-qdgt9\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158562 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31517abb-bb81-4882-9d24-462e89cad611-client-ca\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158648 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/31517abb-bb81-4882-9d24-462e89cad611-tmp\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.158680 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ad9bcf-7352-426a-8c3a-94904bd8616c-tmp" (OuterVolumeSpecName: "tmp") pod "b9ad9bcf-7352-426a-8c3a-94904bd8616c" (UID: "b9ad9bcf-7352-426a-8c3a-94904bd8616c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.159282 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-client-ca" (OuterVolumeSpecName: "client-ca") pod "b9ad9bcf-7352-426a-8c3a-94904bd8616c" (UID: "b9ad9bcf-7352-426a-8c3a-94904bd8616c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.159359 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-config" (OuterVolumeSpecName: "config") pod "b9ad9bcf-7352-426a-8c3a-94904bd8616c" (UID: "b9ad9bcf-7352-426a-8c3a-94904bd8616c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.169366 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ad9bcf-7352-426a-8c3a-94904bd8616c-kube-api-access-7dnmg" (OuterVolumeSpecName: "kube-api-access-7dnmg") pod "b9ad9bcf-7352-426a-8c3a-94904bd8616c" (UID: "b9ad9bcf-7352-426a-8c3a-94904bd8616c"). InnerVolumeSpecName "kube-api-access-7dnmg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.169378 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ad9bcf-7352-426a-8c3a-94904bd8616c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b9ad9bcf-7352-426a-8c3a-94904bd8616c" (UID: "b9ad9bcf-7352-426a-8c3a-94904bd8616c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259589 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31517abb-bb81-4882-9d24-462e89cad611-serving-cert\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259663 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31517abb-bb81-4882-9d24-462e89cad611-config\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259704 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qdgt9\" (UniqueName: \"kubernetes.io/projected/31517abb-bb81-4882-9d24-462e89cad611-kube-api-access-qdgt9\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259802 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31517abb-bb81-4882-9d24-462e89cad611-client-ca\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259890 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/31517abb-bb81-4882-9d24-462e89cad611-tmp\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259956 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7dnmg\" (UniqueName: \"kubernetes.io/projected/b9ad9bcf-7352-426a-8c3a-94904bd8616c-kube-api-access-7dnmg\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.259975 5103 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ad9bcf-7352-426a-8c3a-94904bd8616c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.260024 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b9ad9bcf-7352-426a-8c3a-94904bd8616c-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.260046 5103 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.260125 5103 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9ad9bcf-7352-426a-8c3a-94904bd8616c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:17:02 crc kubenswrapper[5103]: 
I0130 00:17:02.260921 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/31517abb-bb81-4882-9d24-462e89cad611-tmp\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.261621 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31517abb-bb81-4882-9d24-462e89cad611-config\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.261765 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31517abb-bb81-4882-9d24-462e89cad611-client-ca\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.265246 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31517abb-bb81-4882-9d24-462e89cad611-serving-cert\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.293255 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdgt9\" (UniqueName: \"kubernetes.io/projected/31517abb-bb81-4882-9d24-462e89cad611-kube-api-access-qdgt9\") pod \"route-controller-manager-67bd567449-lhd5w\" (UID: \"31517abb-bb81-4882-9d24-462e89cad611\") " pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.368760 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.494916 5103 generic.go:358] "Generic (PLEG): container finished" podID="b9ad9bcf-7352-426a-8c3a-94904bd8616c" containerID="23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903" exitCode=0 Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.496034 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" event={"ID":"b9ad9bcf-7352-426a-8c3a-94904bd8616c","Type":"ContainerDied","Data":"23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903"} Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.496103 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" event={"ID":"b9ad9bcf-7352-426a-8c3a-94904bd8616c","Type":"ContainerDied","Data":"8cf5d8f319e48117f88db881cdbccdb15a5652356eefcbc633994caf67c03759"} Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.496132 5103 scope.go:117] "RemoveContainer" containerID="23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.496335 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.560472 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb"] Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.565142 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75764f8cc-kl6fb"] Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.566701 5103 scope.go:117] "RemoveContainer" containerID="23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903" Jan 30 00:17:02 crc kubenswrapper[5103]: E0130 00:17:02.567817 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903\": container with ID starting with 23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903 not found: ID does not exist" containerID="23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.567857 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903"} err="failed to get container status \"23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903\": rpc error: code = NotFound desc = could not find container \"23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903\": container with ID starting with 23a5eb9923298a9ba3d1dd66e38ba6d4175fc840bfcd9e2093a84a93960ee903 not found: ID does not exist" Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.665877 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w"] Jan 30 00:17:02 crc kubenswrapper[5103]: W0130 00:17:02.673763 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31517abb_bb81_4882_9d24_462e89cad611.slice/crio-6be16c3da02e3541efaad9a6ccf23cad0f520e1f616c1f0f96909a61742a42a6 WatchSource:0}: Error finding container 6be16c3da02e3541efaad9a6ccf23cad0f520e1f616c1f0f96909a61742a42a6: Status 404 returned error can't find the container with id 6be16c3da02e3541efaad9a6ccf23cad0f520e1f616c1f0f96909a61742a42a6 Jan 30 00:17:02 crc kubenswrapper[5103]: I0130 00:17:02.879281 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9ad9bcf-7352-426a-8c3a-94904bd8616c" path="/var/lib/kubelet/pods/b9ad9bcf-7352-426a-8c3a-94904bd8616c/volumes" Jan 30 00:17:03 crc kubenswrapper[5103]: I0130 00:17:03.507352 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" event={"ID":"31517abb-bb81-4882-9d24-462e89cad611","Type":"ContainerStarted","Data":"c3723f8482f35cd737f362bcd21a14284d94808d4d4cffff06ef6755f73b52e6"} Jan 30 00:17:03 crc kubenswrapper[5103]: I0130 00:17:03.507432 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" event={"ID":"31517abb-bb81-4882-9d24-462e89cad611","Type":"ContainerStarted","Data":"6be16c3da02e3541efaad9a6ccf23cad0f520e1f616c1f0f96909a61742a42a6"} Jan 30 00:17:03 crc kubenswrapper[5103]: I0130 00:17:03.507686 5103 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:03 crc kubenswrapper[5103]: I0130 00:17:03.515809 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" Jan 30 00:17:03 crc kubenswrapper[5103]: I0130 00:17:03.532657 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67bd567449-lhd5w" podStartSLOduration=2.532634285 podStartE2EDuration="2.532634285s" podCreationTimestamp="2026-01-30 00:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:17:03.531489937 +0000 UTC m=+413.402987999" watchObservedRunningTime="2026-01-30 00:17:03.532634285 +0000 UTC m=+413.404132337" Jan 30 00:17:28 crc kubenswrapper[5103]: I0130 00:17:28.493551 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:17:28 crc kubenswrapper[5103]: I0130 00:17:28.494127 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.493784 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.494669 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.494755 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.495863 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"346d68dc943f95b7c3635e3ca8c695bae2c81b93ca2769fe09d08ce315c33590"} pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.495995 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" containerID="cri-o://346d68dc943f95b7c3635e3ca8c695bae2c81b93ca2769fe09d08ce315c33590" gracePeriod=600 Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.945834 5103 
generic.go:358] "Generic (PLEG): container finished" podID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerID="346d68dc943f95b7c3635e3ca8c695bae2c81b93ca2769fe09d08ce315c33590" exitCode=0 Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.945925 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerDied","Data":"346d68dc943f95b7c3635e3ca8c695bae2c81b93ca2769fe09d08ce315c33590"} Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.946305 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"399cda3a0f0aa765b5f32eacaf816dc8466c112e0b2d2cfeb27afa2df61ade76"} Jan 30 00:17:58 crc kubenswrapper[5103]: I0130 00:17:58.946334 5103 scope.go:117] "RemoveContainer" containerID="47d4649f628f9ff08c1eae857ce8b6a70f66ec474c9229aafcc4d26442b014bb" Jan 30 00:18:03 crc kubenswrapper[5103]: I0130 00:18:03.097808 5103 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 00:18:03 crc kubenswrapper[5103]: I0130 00:18:03.207185 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r9ddz"] Jan 30 00:18:09 crc kubenswrapper[5103]: I0130 00:18:09.723015 5103 ???:1] "http: TLS handshake error from 192.168.126.11:49936: no serving certificate available for the kubelet" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.261151 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerName="oauth-openshift" containerID="cri-o://661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53" gracePeriod=15 Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.730624 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.784283 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-8696675b97-lqpdm"] Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.785177 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerName="oauth-openshift" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.785207 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerName="oauth-openshift" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.785318 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerName="oauth-openshift" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.794750 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8696675b97-lqpdm"] Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.794913 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797103 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-login\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797205 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-provider-selection\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797258 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-router-certs\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797329 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-trusted-ca-bundle\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797363 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-ocp-branding-template\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797392 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-dir\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797416 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-cliconfig\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797522 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-idp-0-file-data\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797516 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797559 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-error\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797592 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-policies\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797622 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h6wk\" (UniqueName: \"kubernetes.io/projected/10feec13-3e3a-46a2-8fdd-c1098eebd334-kube-api-access-7h6wk\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797644 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-serving-cert\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797669 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-service-ca\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797729 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-session\") pod \"10feec13-3e3a-46a2-8fdd-c1098eebd334\" (UID: \"10feec13-3e3a-46a2-8fdd-c1098eebd334\") " Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.797949 5103 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.798367 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.798864 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.798887 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.799432 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.804252 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.804278 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.804419 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.807427 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10feec13-3e3a-46a2-8fdd-c1098eebd334-kube-api-access-7h6wk" (OuterVolumeSpecName: "kube-api-access-7h6wk") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "kube-api-access-7h6wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.807473 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.813480 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.821253 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.822196 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.824754 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "10feec13-3e3a-46a2-8fdd-c1098eebd334" (UID: "10feec13-3e3a-46a2-8fdd-c1098eebd334"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898665 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898731 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898755 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-login\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898780 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-session\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898823 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-service-ca\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898867 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898901 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-router-certs\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898937 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z9sv\" (UniqueName: 
\"kubernetes.io/projected/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-kube-api-access-6z9sv\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898961 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-audit-dir\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.898998 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-audit-policies\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899021 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899074 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899115 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-error\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899147 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899226 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899241 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899256 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899269 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899282 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899295 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899307 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899321 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899335 5103 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899347 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7h6wk\" (UniqueName: \"kubernetes.io/projected/10feec13-3e3a-46a2-8fdd-c1098eebd334-kube-api-access-7h6wk\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899358 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899370 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:28 crc kubenswrapper[5103]: I0130 00:18:28.899385 5103 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/10feec13-3e3a-46a2-8fdd-c1098eebd334-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.000891 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-error\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.000990 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.001358 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.001507 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.001825 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-login\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.001864 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-session\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.001946 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-service-ca\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002002 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002102 5103 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-router-certs\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002172 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6z9sv\" (UniqueName: \"kubernetes.io/projected/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-kube-api-access-6z9sv\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002208 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-audit-dir\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002284 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-audit-policies\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002315 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.002403 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.003658 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-audit-dir\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.006590 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.006729 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-service-ca\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.006895 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-audit-policies\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.006949 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.007400 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-error\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.007652 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.009150 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-template-login\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.010810 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.011894 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-router-certs\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.013242 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.013951 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.014750 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-v4-0-config-system-session\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.021473 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z9sv\" (UniqueName: \"kubernetes.io/projected/e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d-kube-api-access-6z9sv\") pod \"oauth-openshift-8696675b97-lqpdm\" (UID: \"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d\") " pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.152888 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.154758 5103 generic.go:358] "Generic (PLEG): container finished" podID="10feec13-3e3a-46a2-8fdd-c1098eebd334" containerID="661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53" exitCode=0 Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.154818 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" event={"ID":"10feec13-3e3a-46a2-8fdd-c1098eebd334","Type":"ContainerDied","Data":"661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53"} Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.154858 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" event={"ID":"10feec13-3e3a-46a2-8fdd-c1098eebd334","Type":"ContainerDied","Data":"e3d46683d3f3d86228a063dcb193d36e8067e6dad542d18de17ac86ad6dc3b86"} Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.154885 5103 scope.go:117] "RemoveContainer" containerID="661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.155093 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-r9ddz" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.177489 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r9ddz"] Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.182895 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-r9ddz"] Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.191784 5103 scope.go:117] "RemoveContainer" containerID="661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53" Jan 30 00:18:29 crc kubenswrapper[5103]: E0130 00:18:29.192369 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53\": container with ID starting with 661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53 not found: ID does not exist" containerID="661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.192565 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53"} err="failed to get container status \"661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53\": rpc error: code = NotFound desc = could not find container \"661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53\": container with ID starting with 661b690e23a18421cf1f6bea999077d53a164f010e4c9253ec0222cc1ef16b53 not found: ID does not exist" Jan 30 00:18:29 crc kubenswrapper[5103]: I0130 00:18:29.398361 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8696675b97-lqpdm"] Jan 30 00:18:30 crc kubenswrapper[5103]: I0130 00:18:30.164366 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" event={"ID":"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d","Type":"ContainerStarted","Data":"9301a08c12aea9cb302ac0b756f739416190283b29d56b91af5ee52511ca98cd"} Jan 30 00:18:30 crc kubenswrapper[5103]: I0130 00:18:30.165180 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" event={"ID":"e3ef6cf2-7ff8-4421-9e0e-3faaed5fca0d","Type":"ContainerStarted","Data":"cd88805702bcd9241ba5e98510c9c7947528f28b61ee5c64b8f4362451d75c8c"} Jan 30 00:18:30 crc kubenswrapper[5103]: I0130 00:18:30.167544 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:30 crc kubenswrapper[5103]: I0130 00:18:30.192533 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" podStartSLOduration=27.192514579 podStartE2EDuration="27.192514579s" podCreationTimestamp="2026-01-30 00:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:18:30.187244119 +0000 UTC m=+500.058742181" watchObservedRunningTime="2026-01-30 00:18:30.192514579 +0000 UTC m=+500.064012631" Jan 30 00:18:30 crc kubenswrapper[5103]: I0130 00:18:30.495180 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-8696675b97-lqpdm" Jan 30 00:18:30 
crc kubenswrapper[5103]: I0130 00:18:30.887681 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10feec13-3e3a-46a2-8fdd-c1098eebd334" path="/var/lib/kubelet/pods/10feec13-3e3a-46a2-8fdd-c1098eebd334/volumes" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.467700 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7c7gb"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.468695 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7c7gb" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="registry-server" containerID="cri-o://7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858" gracePeriod=30 Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.474988 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbjkv"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.475333 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nbjkv" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="registry-server" containerID="cri-o://775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147" gracePeriod=30 Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.489557 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mf247"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.489808 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" containerID="cri-o://bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1" gracePeriod=30 Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.504460 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59s8"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.504742 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z59s8" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="registry-server" containerID="cri-o://9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39" gracePeriod=30 Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.510295 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-m7wbv"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.515545 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.519450 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2rjzw"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.519945 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2rjzw" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="registry-server" containerID="cri-o://ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0" gracePeriod=30 Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.526526 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-m7wbv"] Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.648762 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djtqr\" (UniqueName: \"kubernetes.io/projected/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-kube-api-access-djtqr\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.649227 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.649272 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-tmp\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.649323 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.753945 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.753989 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-tmp\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.754026 5103 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.754065 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-djtqr\" (UniqueName: \"kubernetes.io/projected/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-kube-api-access-djtqr\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.754797 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-tmp\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.755219 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.760797 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.771814 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-djtqr\" (UniqueName: \"kubernetes.io/projected/0180b3c6-131f-4a8c-ac9a-1b410e056ae2-kube-api-access-djtqr\") pod \"marketplace-operator-547dbd544d-m7wbv\" (UID: \"0180b3c6-131f-4a8c-ac9a-1b410e056ae2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.899917 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.904808 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.912567 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.914808 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.929271 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-mf247_b15f695a-0fc1-4ab5-aad2-341f3bf6822d/marketplace-operator/1.log" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.929344 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:18:59 crc kubenswrapper[5103]: I0130 00:18:59.935254 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.058494 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-catalog-content\") pod \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.058855 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") pod \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.058912 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-utilities\") pod \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.058937 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-catalog-content\") pod \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.059616 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-tmp" (OuterVolumeSpecName: "tmp") pod "b15f695a-0fc1-4ab5-aad2-341f3bf6822d" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.059986 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-utilities" (OuterVolumeSpecName: "utilities") pod "6c3bfb26-42f9-43f4-8126-b941aea6ecca" (UID: "6c3bfb26-42f9-43f4-8126-b941aea6ecca"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061570 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-tmp\") pod \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061636 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-catalog-content\") pod \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061667 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-catalog-content\") pod \"c312b248-250c-4b33-9c7a-f79c1e73a75b\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061690 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gwr8\" (UniqueName: \"kubernetes.io/projected/c312b248-250c-4b33-9c7a-f79c1e73a75b-kube-api-access-4gwr8\") pod \"c312b248-250c-4b33-9c7a-f79c1e73a75b\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061749 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgxx5\" (UniqueName: \"kubernetes.io/projected/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-kube-api-access-cgxx5\") pod \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061786 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdntz\" (UniqueName: \"kubernetes.io/projected/9807e5f5-fa63-4e0c-9b52-3c0044337c40-kube-api-access-zdntz\") pod \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061931 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-utilities\") pod \"c312b248-250c-4b33-9c7a-f79c1e73a75b\" (UID: \"c312b248-250c-4b33-9c7a-f79c1e73a75b\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.061983 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") pod \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.062012 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-utilities\") pod \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\" (UID: \"ebb7f7db-c773-49f6-b58b-6bd929f25f3a\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.062036 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") pod 
\"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\" (UID: \"b15f695a-0fc1-4ab5-aad2-341f3bf6822d\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.062578 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtd7x\" (UniqueName: \"kubernetes.io/projected/6c3bfb26-42f9-43f4-8126-b941aea6ecca-kube-api-access-qtd7x\") pod \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\" (UID: \"6c3bfb26-42f9-43f4-8126-b941aea6ecca\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.062629 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-utilities\") pod \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\" (UID: \"9807e5f5-fa63-4e0c-9b52-3c0044337c40\") " Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.062704 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b15f695a-0fc1-4ab5-aad2-341f3bf6822d" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.063332 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.063357 5103 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-tmp\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.063369 5103 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.063812 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-utilities" (OuterVolumeSpecName: "utilities") pod "ebb7f7db-c773-49f6-b58b-6bd929f25f3a" (UID: "ebb7f7db-c773-49f6-b58b-6bd929f25f3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.063834 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-utilities" (OuterVolumeSpecName: "utilities") pod "9807e5f5-fa63-4e0c-9b52-3c0044337c40" (UID: "9807e5f5-fa63-4e0c-9b52-3c0044337c40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.063883 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-utilities" (OuterVolumeSpecName: "utilities") pod "c312b248-250c-4b33-9c7a-f79c1e73a75b" (UID: "c312b248-250c-4b33-9c7a-f79c1e73a75b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.064999 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw" (OuterVolumeSpecName: "kube-api-access-6bzkw") pod "b15f695a-0fc1-4ab5-aad2-341f3bf6822d" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d"). InnerVolumeSpecName "kube-api-access-6bzkw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.065768 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b15f695a-0fc1-4ab5-aad2-341f3bf6822d" (UID: "b15f695a-0fc1-4ab5-aad2-341f3bf6822d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.066973 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3bfb26-42f9-43f4-8126-b941aea6ecca-kube-api-access-qtd7x" (OuterVolumeSpecName: "kube-api-access-qtd7x") pod "6c3bfb26-42f9-43f4-8126-b941aea6ecca" (UID: "6c3bfb26-42f9-43f4-8126-b941aea6ecca"). InnerVolumeSpecName "kube-api-access-qtd7x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.078429 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9807e5f5-fa63-4e0c-9b52-3c0044337c40-kube-api-access-zdntz" (OuterVolumeSpecName: "kube-api-access-zdntz") pod "9807e5f5-fa63-4e0c-9b52-3c0044337c40" (UID: "9807e5f5-fa63-4e0c-9b52-3c0044337c40"). InnerVolumeSpecName "kube-api-access-zdntz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.080889 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c312b248-250c-4b33-9c7a-f79c1e73a75b-kube-api-access-4gwr8" (OuterVolumeSpecName: "kube-api-access-4gwr8") pod "c312b248-250c-4b33-9c7a-f79c1e73a75b" (UID: "c312b248-250c-4b33-9c7a-f79c1e73a75b"). InnerVolumeSpecName "kube-api-access-4gwr8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.081309 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c312b248-250c-4b33-9c7a-f79c1e73a75b" (UID: "c312b248-250c-4b33-9c7a-f79c1e73a75b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.083525 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-kube-api-access-cgxx5" (OuterVolumeSpecName: "kube-api-access-cgxx5") pod "ebb7f7db-c773-49f6-b58b-6bd929f25f3a" (UID: "ebb7f7db-c773-49f6-b58b-6bd929f25f3a"). InnerVolumeSpecName "kube-api-access-cgxx5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.110869 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebb7f7db-c773-49f6-b58b-6bd929f25f3a" (UID: "ebb7f7db-c773-49f6-b58b-6bd929f25f3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.112350 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9807e5f5-fa63-4e0c-9b52-3c0044337c40" (UID: "9807e5f5-fa63-4e0c-9b52-3c0044337c40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164469 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164504 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4gwr8\" (UniqueName: \"kubernetes.io/projected/c312b248-250c-4b33-9c7a-f79c1e73a75b-kube-api-access-4gwr8\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164519 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cgxx5\" (UniqueName: \"kubernetes.io/projected/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-kube-api-access-cgxx5\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164529 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zdntz\" (UniqueName: \"kubernetes.io/projected/9807e5f5-fa63-4e0c-9b52-3c0044337c40-kube-api-access-zdntz\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164541 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c312b248-250c-4b33-9c7a-f79c1e73a75b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164551 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164562 5103 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164574 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qtd7x\" (UniqueName: \"kubernetes.io/projected/6c3bfb26-42f9-43f4-8126-b941aea6ecca-kube-api-access-qtd7x\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164587 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164598 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/9807e5f5-fa63-4e0c-9b52-3c0044337c40-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164608 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6bzkw\" (UniqueName: \"kubernetes.io/projected/b15f695a-0fc1-4ab5-aad2-341f3bf6822d-kube-api-access-6bzkw\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.164615 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb7f7db-c773-49f6-b58b-6bd929f25f3a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.184789 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c3bfb26-42f9-43f4-8126-b941aea6ecca" (UID: "6c3bfb26-42f9-43f4-8126-b941aea6ecca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.266528 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3bfb26-42f9-43f4-8126-b941aea6ecca-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.339291 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-m7wbv"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.367434 5103 generic.go:358] "Generic (PLEG): container finished" podID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerID="7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858" exitCode=0 Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.367521 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7c7gb" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.367544 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c7gb" event={"ID":"ebb7f7db-c773-49f6-b58b-6bd929f25f3a","Type":"ContainerDied","Data":"7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.367656 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c7gb" event={"ID":"ebb7f7db-c773-49f6-b58b-6bd929f25f3a","Type":"ContainerDied","Data":"b5cac0fe83167992a8ae22830c4af1a52661a8e624e0749533087d96d73359ba"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.367702 5103 scope.go:117] "RemoveContainer" containerID="7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.371697 5103 generic.go:358] "Generic (PLEG): container finished" podID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerID="775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147" exitCode=0 Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.371927 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nbjkv" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.371929 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbjkv" event={"ID":"9807e5f5-fa63-4e0c-9b52-3c0044337c40","Type":"ContainerDied","Data":"775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.372239 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbjkv" event={"ID":"9807e5f5-fa63-4e0c-9b52-3c0044337c40","Type":"ContainerDied","Data":"61764b58f50ceebb2c7b19c23cfca937d7976fd5804c25d5eefbebe83ee09940"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.374927 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" event={"ID":"0180b3c6-131f-4a8c-ac9a-1b410e056ae2","Type":"ContainerStarted","Data":"40913b27a3c7f0d304d4dc9072ac1226e961880500f8c0246062547e1fc5e20b"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.377734 5103 generic.go:358] "Generic (PLEG): container finished" podID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerID="ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0" exitCode=0 Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.377847 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerDied","Data":"ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.377876 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rjzw" event={"ID":"6c3bfb26-42f9-43f4-8126-b941aea6ecca","Type":"ContainerDied","Data":"06c99f794d6099db2b3382cfe3ae52362055fdf833d1abdcf54ef653697a4f26"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.377995 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rjzw" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.385644 5103 generic.go:358] "Generic (PLEG): container finished" podID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerID="9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39" exitCode=0 Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.385752 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59s8" event={"ID":"c312b248-250c-4b33-9c7a-f79c1e73a75b","Type":"ContainerDied","Data":"9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.385778 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59s8" event={"ID":"c312b248-250c-4b33-9c7a-f79c1e73a75b","Type":"ContainerDied","Data":"e00554ee3ee9141178c8c93a9b221de3559a21de326b50788319212bb34c00ff"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.385853 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z59s8" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.390208 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-mf247_b15f695a-0fc1-4ab5-aad2-341f3bf6822d/marketplace-operator/1.log" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.390262 5103 generic.go:358] "Generic (PLEG): container finished" podID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerID="bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1" exitCode=0 Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.390314 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerDied","Data":"bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.390344 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" event={"ID":"b15f695a-0fc1-4ab5-aad2-341f3bf6822d","Type":"ContainerDied","Data":"0368a0c326937f9c7deb7edf4ed88ddf03334595ee1cd83191767d2fb8e30f45"} Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.390505 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-mf247" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.394410 5103 scope.go:117] "RemoveContainer" containerID="a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.422658 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7c7gb"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.428237 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7c7gb"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.432567 5103 scope.go:117] "RemoveContainer" containerID="2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.464638 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbjkv"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.472745 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nbjkv"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.477702 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2rjzw"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.484762 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2rjzw"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.486245 5103 scope.go:117] "RemoveContainer" containerID="7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.486716 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858\": container with ID starting with 7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858 not found: ID does not exist" containerID="7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 
00:19:00.486918 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858"} err="failed to get container status \"7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858\": rpc error: code = NotFound desc = could not find container \"7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858\": container with ID starting with 7e47b540180e94eadb6c68f97be4771f79eb02a379d31d9ef66d2752b2a79858 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.487094 5103 scope.go:117] "RemoveContainer" containerID="a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.487598 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be\": container with ID starting with a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be not found: ID does not exist" containerID="a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.487643 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be"} err="failed to get container status \"a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be\": rpc error: code = NotFound desc = could not find container \"a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be\": container with ID starting with a310065ce62b519664fbbc8c0e146d2d5099a03afe07298cd4726f58e29742be not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.487670 5103 scope.go:117] "RemoveContainer" containerID="2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.488265 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153\": container with ID starting with 2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153 not found: ID does not exist" containerID="2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.488290 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153"} err="failed to get container status \"2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153\": rpc error: code = NotFound desc = could not find container \"2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153\": container with ID starting with 2921a64ab07b2e211b7bf236b2b451e5bd16a4f098fecd8f08172a243f6c2153 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.488306 5103 scope.go:117] "RemoveContainer" containerID="775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.498378 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mf247"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.507646 5103 scope.go:117] "RemoveContainer" 
containerID="1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.507755 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-mf247"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.512574 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59s8"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.518464 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59s8"] Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.524794 5103 scope.go:117] "RemoveContainer" containerID="b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.540890 5103 scope.go:117] "RemoveContainer" containerID="775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.543341 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147\": container with ID starting with 775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147 not found: ID does not exist" containerID="775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.543397 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147"} err="failed to get container status \"775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147\": rpc error: code = NotFound desc = could not find container \"775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147\": container with ID starting with 775b16208a120a7aa2052cd022cdf00369ebcf8df3d4445f489f5eb239a68147 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.543430 5103 scope.go:117] "RemoveContainer" containerID="1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.543806 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5\": container with ID starting with 1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5 not found: ID does not exist" containerID="1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.543847 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5"} err="failed to get container status \"1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5\": rpc error: code = NotFound desc = could not find container \"1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5\": container with ID starting with 1c95bbda72b4402981ee1d47cfab89f90f7032f5178d3f7784d66f376f3bacb5 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.543878 5103 scope.go:117] "RemoveContainer" containerID="b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.544153 5103 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f\": container with ID starting with b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f not found: ID does not exist" containerID="b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.544183 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f"} err="failed to get container status \"b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f\": rpc error: code = NotFound desc = could not find container \"b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f\": container with ID starting with b67ec93340623ddb972d36c34757d5a370d713f8b7ec11b4b06ab27a06ead16f not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.544201 5103 scope.go:117] "RemoveContainer" containerID="ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.557858 5103 scope.go:117] "RemoveContainer" containerID="f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.587418 5103 scope.go:117] "RemoveContainer" containerID="fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.603475 5103 scope.go:117] "RemoveContainer" containerID="ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.603745 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0\": container with ID starting with ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0 not found: ID does not exist" containerID="ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.603776 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0"} err="failed to get container status \"ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0\": rpc error: code = NotFound desc = could not find container \"ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0\": container with ID starting with ee5164d868b8059f161ce024f7ed1ef501965f6aafb0e79601b3058b365a87d0 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.603802 5103 scope.go:117] "RemoveContainer" containerID="f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.604008 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708\": container with ID starting with f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708 not found: ID does not exist" containerID="f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.604032 5103 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708"} err="failed to get container status \"f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708\": rpc error: code = NotFound desc = could not find container \"f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708\": container with ID starting with f210b8d17bc950a6a9b9be34d83d1c3b5e2462542f0b428a4b605efaddf27708 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.604066 5103 scope.go:117] "RemoveContainer" containerID="fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.604372 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f\": container with ID starting with fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f not found: ID does not exist" containerID="fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.604421 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f"} err="failed to get container status \"fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f\": rpc error: code = NotFound desc = could not find container \"fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f\": container with ID starting with fecc13f1f25ca29ff57d6b158b400cca9670990af3d4aafe538ba707c472c10f not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.604437 5103 scope.go:117] "RemoveContainer" containerID="9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.620940 5103 scope.go:117] "RemoveContainer" containerID="92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.673939 5103 scope.go:117] "RemoveContainer" containerID="8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.691622 5103 scope.go:117] "RemoveContainer" containerID="9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.692071 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39\": container with ID starting with 9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39 not found: ID does not exist" containerID="9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.692112 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39"} err="failed to get container status \"9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39\": rpc error: code = NotFound desc = could not find container \"9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39\": container with ID starting with 9ecad4d1fe056e75fef1caecdc5f5bdcc1be75e883d774efac718a288d6f8e39 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.692139 5103 
scope.go:117] "RemoveContainer" containerID="92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.692436 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292\": container with ID starting with 92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292 not found: ID does not exist" containerID="92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.692459 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292"} err="failed to get container status \"92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292\": rpc error: code = NotFound desc = could not find container \"92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292\": container with ID starting with 92cbcab8b5188b0795a512fff83aaa35ff7cf1af8ead1fe836d532c332abf292 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.692476 5103 scope.go:117] "RemoveContainer" containerID="8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.693151 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35\": container with ID starting with 8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35 not found: ID does not exist" containerID="8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.693176 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35"} err="failed to get container status \"8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35\": rpc error: code = NotFound desc = could not find container \"8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35\": container with ID starting with 8a2d593304000c09e07348dbf4bd56837138b18fb49befd388d9f8e1e633bb35 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.693191 5103 scope.go:117] "RemoveContainer" containerID="bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.708276 5103 scope.go:117] "RemoveContainer" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.727240 5103 scope.go:117] "RemoveContainer" containerID="bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.727659 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1\": container with ID starting with bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1 not found: ID does not exist" containerID="bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.727719 5103 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1"} err="failed to get container status \"bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1\": rpc error: code = NotFound desc = could not find container \"bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1\": container with ID starting with bf952f3693dc145187e46daed40151260024ee630ca1dd24be741bcdcee07fd1 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.727756 5103 scope.go:117] "RemoveContainer" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:19:00 crc kubenswrapper[5103]: E0130 00:19:00.728157 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0\": container with ID starting with 3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0 not found: ID does not exist" containerID="3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.728193 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0"} err="failed to get container status \"3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0\": rpc error: code = NotFound desc = could not find container \"3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0\": container with ID starting with 3dddbd89b982c6adaa788ec2f72e64e99775df1788210b2080e09f006e68d1a0 not found: ID does not exist" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.876657 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" path="/var/lib/kubelet/pods/6c3bfb26-42f9-43f4-8126-b941aea6ecca/volumes" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.877773 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" path="/var/lib/kubelet/pods/9807e5f5-fa63-4e0c-9b52-3c0044337c40/volumes" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.878730 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" path="/var/lib/kubelet/pods/b15f695a-0fc1-4ab5-aad2-341f3bf6822d/volumes" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.879990 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" path="/var/lib/kubelet/pods/c312b248-250c-4b33-9c7a-f79c1e73a75b/volumes" Jan 30 00:19:00 crc kubenswrapper[5103]: I0130 00:19:00.880804 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" path="/var/lib/kubelet/pods/ebb7f7db-c773-49f6-b58b-6bd929f25f3a/volumes" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.405907 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" event={"ID":"0180b3c6-131f-4a8c-ac9a-1b410e056ae2","Type":"ContainerStarted","Data":"13897c56a6b4836c8273e8f74e9c06cfba82e6ca2ab6094ff098d5d5a49883b7"} Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.406126 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 
00:19:01.411971 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.427389 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-m7wbv" podStartSLOduration=2.42735038 podStartE2EDuration="2.42735038s" podCreationTimestamp="2026-01-30 00:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:19:01.426816257 +0000 UTC m=+531.298314349" watchObservedRunningTime="2026-01-30 00:19:01.42735038 +0000 UTC m=+531.298848472" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.487286 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-29m6m"] Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489374 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489421 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489436 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489445 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489480 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489489 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489502 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489510 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489526 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489533 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489566 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489573 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489583 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489591 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489602 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489610 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489637 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489645 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="extract-content" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489665 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489672 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489682 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489689 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="extract-utilities" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489726 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489733 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489741 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489748 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489757 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489766 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489896 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489912 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" 
containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489922 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="9807e5f5-fa63-4e0c-9b52-3c0044337c40" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489956 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="c312b248-250c-4b33-9c7a-f79c1e73a75b" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489968 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="ebb7f7db-c773-49f6-b58b-6bd929f25f3a" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.489977 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="6c3bfb26-42f9-43f4-8126-b941aea6ecca" containerName="registry-server" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.490123 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.490152 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.490311 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b15f695a-0fc1-4ab5-aad2-341f3bf6822d" containerName="marketplace-operator" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.513866 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-29m6m"] Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.514010 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.516277 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.700462 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-catalog-content\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.700549 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-utilities\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.700576 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wk6p\" (UniqueName: \"kubernetes.io/projected/3c68a080-5bee-4c96-8683-dfbc9187c20f-kube-api-access-6wk6p\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.802183 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-catalog-content\") pod 
\"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.802377 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-utilities\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.802445 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wk6p\" (UniqueName: \"kubernetes.io/projected/3c68a080-5bee-4c96-8683-dfbc9187c20f-kube-api-access-6wk6p\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.802652 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-catalog-content\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.802726 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-utilities\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.824795 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wk6p\" (UniqueName: \"kubernetes.io/projected/3c68a080-5bee-4c96-8683-dfbc9187c20f-kube-api-access-6wk6p\") pod \"redhat-marketplace-29m6m\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:01 crc kubenswrapper[5103]: I0130 00:19:01.838813 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.044408 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-29m6m"] Jan 30 00:19:02 crc kubenswrapper[5103]: W0130 00:19:02.057446 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c68a080_5bee_4c96_8683_dfbc9187c20f.slice/crio-93eda4d031aed494c523ca77f1e91f142fd42bf9c41c24e7b6cc12d812375e6e WatchSource:0}: Error finding container 93eda4d031aed494c523ca77f1e91f142fd42bf9c41c24e7b6cc12d812375e6e: Status 404 returned error can't find the container with id 93eda4d031aed494c523ca77f1e91f142fd42bf9c41c24e7b6cc12d812375e6e Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.078930 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vq6tr"] Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.090171 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.091980 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vq6tr"] Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.093154 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.105820 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-catalog-content\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.105970 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-utilities\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.106027 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpltr\" (UniqueName: \"kubernetes.io/projected/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-kube-api-access-kpltr\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.207392 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-catalog-content\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.207785 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-utilities\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.207813 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kpltr\" (UniqueName: \"kubernetes.io/projected/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-kube-api-access-kpltr\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.208667 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-catalog-content\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.208748 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-utilities\") pod 
\"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.230701 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpltr\" (UniqueName: \"kubernetes.io/projected/a044cd80-0a4b-43d0-bfa8-107bddaa28fc-kube-api-access-kpltr\") pod \"certified-operators-vq6tr\" (UID: \"a044cd80-0a4b-43d0-bfa8-107bddaa28fc\") " pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.415918 5103 generic.go:358] "Generic (PLEG): container finished" podID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerID="44a4d6d1f7b80ae12b217c95d1dbfec630c58aa07e5059535d601fbdbef544c4" exitCode=0 Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.417752 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29m6m" event={"ID":"3c68a080-5bee-4c96-8683-dfbc9187c20f","Type":"ContainerDied","Data":"44a4d6d1f7b80ae12b217c95d1dbfec630c58aa07e5059535d601fbdbef544c4"} Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.417873 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29m6m" event={"ID":"3c68a080-5bee-4c96-8683-dfbc9187c20f","Type":"ContainerStarted","Data":"93eda4d031aed494c523ca77f1e91f142fd42bf9c41c24e7b6cc12d812375e6e"} Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.436007 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:02 crc kubenswrapper[5103]: I0130 00:19:02.854362 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vq6tr"] Jan 30 00:19:02 crc kubenswrapper[5103]: W0130 00:19:02.865335 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda044cd80_0a4b_43d0_bfa8_107bddaa28fc.slice/crio-031e835559fcd26eb9fa3a47383c180a19034a4fb256138fe389435481e3d80f WatchSource:0}: Error finding container 031e835559fcd26eb9fa3a47383c180a19034a4fb256138fe389435481e3d80f: Status 404 returned error can't find the container with id 031e835559fcd26eb9fa3a47383c180a19034a4fb256138fe389435481e3d80f Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.427527 5103 generic.go:358] "Generic (PLEG): container finished" podID="a044cd80-0a4b-43d0-bfa8-107bddaa28fc" containerID="56796670bdd69ae09dc9e44816d52f869952458f5b4179e2b791a86641393e0f" exitCode=0 Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.427650 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq6tr" event={"ID":"a044cd80-0a4b-43d0-bfa8-107bddaa28fc","Type":"ContainerDied","Data":"56796670bdd69ae09dc9e44816d52f869952458f5b4179e2b791a86641393e0f"} Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.427698 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq6tr" event={"ID":"a044cd80-0a4b-43d0-bfa8-107bddaa28fc","Type":"ContainerStarted","Data":"031e835559fcd26eb9fa3a47383c180a19034a4fb256138fe389435481e3d80f"} Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.885962 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wmvfq"] Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.892807 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.896983 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.902565 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wmvfq"] Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.936169 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc2ed764-8df0-4a15-9d66-c2abad3ee367-utilities\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.936220 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc2ed764-8df0-4a15-9d66-c2abad3ee367-catalog-content\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.936271 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdhrl\" (UniqueName: \"kubernetes.io/projected/fc2ed764-8df0-4a15-9d66-c2abad3ee367-kube-api-access-cdhrl\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.936963 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-2wtrh"] Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.946603 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:03 crc kubenswrapper[5103]: I0130 00:19:03.968499 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-2wtrh"] Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.037775 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0646b67-80e8-42d2-8d99-b1870fd68749-trusted-ca\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.037828 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.037935 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc2ed764-8df0-4a15-9d66-c2abad3ee367-utilities\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.037982 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c0646b67-80e8-42d2-8d99-b1870fd68749-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038045 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc2ed764-8df0-4a15-9d66-c2abad3ee367-catalog-content\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038086 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-registry-tls\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038165 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-bound-sa-token\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038251 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c0646b67-80e8-42d2-8d99-b1870fd68749-registry-certificates\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: 
\"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038315 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cdhrl\" (UniqueName: \"kubernetes.io/projected/fc2ed764-8df0-4a15-9d66-c2abad3ee367-kube-api-access-cdhrl\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038415 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c0646b67-80e8-42d2-8d99-b1870fd68749-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038514 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbbgl\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-kube-api-access-hbbgl\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038597 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc2ed764-8df0-4a15-9d66-c2abad3ee367-utilities\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.038680 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc2ed764-8df0-4a15-9d66-c2abad3ee367-catalog-content\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.058684 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.059099 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdhrl\" (UniqueName: \"kubernetes.io/projected/fc2ed764-8df0-4a15-9d66-c2abad3ee367-kube-api-access-cdhrl\") pod \"redhat-operators-wmvfq\" (UID: \"fc2ed764-8df0-4a15-9d66-c2abad3ee367\") " pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140038 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hbbgl\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-kube-api-access-hbbgl\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140116 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0646b67-80e8-42d2-8d99-b1870fd68749-trusted-ca\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140159 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c0646b67-80e8-42d2-8d99-b1870fd68749-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140180 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-registry-tls\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140198 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-bound-sa-token\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140221 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c0646b67-80e8-42d2-8d99-b1870fd68749-registry-certificates\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140419 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c0646b67-80e8-42d2-8d99-b1870fd68749-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.140846 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c0646b67-80e8-42d2-8d99-b1870fd68749-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.141760 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c0646b67-80e8-42d2-8d99-b1870fd68749-registry-certificates\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.141936 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0646b67-80e8-42d2-8d99-b1870fd68749-trusted-ca\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.143976 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c0646b67-80e8-42d2-8d99-b1870fd68749-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.144366 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-registry-tls\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.155254 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbbgl\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-kube-api-access-hbbgl\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.159966 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c0646b67-80e8-42d2-8d99-b1870fd68749-bound-sa-token\") pod \"image-registry-5d9d95bf5b-2wtrh\" (UID: \"c0646b67-80e8-42d2-8d99-b1870fd68749\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.220860 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.262641 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.440988 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq6tr" event={"ID":"a044cd80-0a4b-43d0-bfa8-107bddaa28fc","Type":"ContainerStarted","Data":"a0fc36499b6defb27a39ee5ad3e68913b8a723a3951ab5f92ca95e3af9a146d8"} Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.443451 5103 generic.go:358] "Generic (PLEG): container finished" podID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerID="e28c324607a0aa3b715230dc818fcdca18f72d1a3d44777010087b06d0384ded" exitCode=0 Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.443541 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29m6m" event={"ID":"3c68a080-5bee-4c96-8683-dfbc9187c20f","Type":"ContainerDied","Data":"e28c324607a0aa3b715230dc818fcdca18f72d1a3d44777010087b06d0384ded"} Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.443559 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wmvfq"] Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.492102 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4gz47"] Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.500739 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.501015 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gz47"] Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.502940 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.505139 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-2wtrh"] Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.551433 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dq9n\" (UniqueName: \"kubernetes.io/projected/5fd1ccc1-87a2-43d0-9183-1e907f804a16-kube-api-access-8dq9n\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.551604 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd1ccc1-87a2-43d0-9183-1e907f804a16-utilities\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.551710 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd1ccc1-87a2-43d0-9183-1e907f804a16-catalog-content\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.652963 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8dq9n\" (UniqueName: \"kubernetes.io/projected/5fd1ccc1-87a2-43d0-9183-1e907f804a16-kube-api-access-8dq9n\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.653840 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd1ccc1-87a2-43d0-9183-1e907f804a16-utilities\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.656394 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd1ccc1-87a2-43d0-9183-1e907f804a16-utilities\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.656599 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd1ccc1-87a2-43d0-9183-1e907f804a16-catalog-content\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.656954 5103 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd1ccc1-87a2-43d0-9183-1e907f804a16-catalog-content\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.686532 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dq9n\" (UniqueName: \"kubernetes.io/projected/5fd1ccc1-87a2-43d0-9183-1e907f804a16-kube-api-access-8dq9n\") pod \"community-operators-4gz47\" (UID: \"5fd1ccc1-87a2-43d0-9183-1e907f804a16\") " pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:04 crc kubenswrapper[5103]: I0130 00:19:04.833766 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.014420 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gz47"] Jan 30 00:19:05 crc kubenswrapper[5103]: W0130 00:19:05.021887 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd1ccc1_87a2_43d0_9183_1e907f804a16.slice/crio-4262c76c43500c907273e53f585b59e3b6a3c6dfeb1c1827654e4b804fa8b124 WatchSource:0}: Error finding container 4262c76c43500c907273e53f585b59e3b6a3c6dfeb1c1827654e4b804fa8b124: Status 404 returned error can't find the container with id 4262c76c43500c907273e53f585b59e3b6a3c6dfeb1c1827654e4b804fa8b124 Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.451096 5103 generic.go:358] "Generic (PLEG): container finished" podID="fc2ed764-8df0-4a15-9d66-c2abad3ee367" containerID="973bd685f3ccbbaddd3b49dd0f04cc38187a240864159c845f7057275144dd10" exitCode=0 Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.451190 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmvfq" event={"ID":"fc2ed764-8df0-4a15-9d66-c2abad3ee367","Type":"ContainerDied","Data":"973bd685f3ccbbaddd3b49dd0f04cc38187a240864159c845f7057275144dd10"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.451244 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmvfq" event={"ID":"fc2ed764-8df0-4a15-9d66-c2abad3ee367","Type":"ContainerStarted","Data":"9c710c8d15552c2920d34de83d6efca72c1149ac37c0a88d0cdf3e52b54843c7"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.455332 5103 generic.go:358] "Generic (PLEG): container finished" podID="a044cd80-0a4b-43d0-bfa8-107bddaa28fc" containerID="a0fc36499b6defb27a39ee5ad3e68913b8a723a3951ab5f92ca95e3af9a146d8" exitCode=0 Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.455452 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq6tr" event={"ID":"a044cd80-0a4b-43d0-bfa8-107bddaa28fc","Type":"ContainerDied","Data":"a0fc36499b6defb27a39ee5ad3e68913b8a723a3951ab5f92ca95e3af9a146d8"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.458834 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29m6m" event={"ID":"3c68a080-5bee-4c96-8683-dfbc9187c20f","Type":"ContainerStarted","Data":"445648755df7aa746d13412d63bc4c92d3a18d86920e1a4192ac33176f6aa9d6"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.461611 5103 generic.go:358] "Generic (PLEG): container finished" 
podID="5fd1ccc1-87a2-43d0-9183-1e907f804a16" containerID="de1441470b3cd6741e15b71b1ffd200dd84612fb9d93c2b2c686102ea1985d1a" exitCode=0 Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.461698 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gz47" event={"ID":"5fd1ccc1-87a2-43d0-9183-1e907f804a16","Type":"ContainerDied","Data":"de1441470b3cd6741e15b71b1ffd200dd84612fb9d93c2b2c686102ea1985d1a"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.461725 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gz47" event={"ID":"5fd1ccc1-87a2-43d0-9183-1e907f804a16","Type":"ContainerStarted","Data":"4262c76c43500c907273e53f585b59e3b6a3c6dfeb1c1827654e4b804fa8b124"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.467072 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" event={"ID":"c0646b67-80e8-42d2-8d99-b1870fd68749","Type":"ContainerStarted","Data":"67622fb5ebea95e5b06d5ffa8816f0019f894435e76ebdc3d22070183e5138d7"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.467112 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" event={"ID":"c0646b67-80e8-42d2-8d99-b1870fd68749","Type":"ContainerStarted","Data":"0a3cb38d200ae3e472aa0224baf5cbda58215d57aaba0960eaa727d40139c366"} Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.467580 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.511416 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" podStartSLOduration=2.511398872 podStartE2EDuration="2.511398872s" podCreationTimestamp="2026-01-30 00:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:19:05.50732368 +0000 UTC m=+535.378821762" watchObservedRunningTime="2026-01-30 00:19:05.511398872 +0000 UTC m=+535.382896944" Jan 30 00:19:05 crc kubenswrapper[5103]: I0130 00:19:05.529246 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-29m6m" podStartSLOduration=3.657254541 podStartE2EDuration="4.529225397s" podCreationTimestamp="2026-01-30 00:19:01 +0000 UTC" firstStartedPulling="2026-01-30 00:19:02.417590667 +0000 UTC m=+532.289088719" lastFinishedPulling="2026-01-30 00:19:03.289561483 +0000 UTC m=+533.161059575" observedRunningTime="2026-01-30 00:19:05.528411516 +0000 UTC m=+535.399909578" watchObservedRunningTime="2026-01-30 00:19:05.529225397 +0000 UTC m=+535.400723469" Jan 30 00:19:06 crc kubenswrapper[5103]: I0130 00:19:06.475452 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmvfq" event={"ID":"fc2ed764-8df0-4a15-9d66-c2abad3ee367","Type":"ContainerStarted","Data":"d316bbfda43a2c82e13b05051fbf85abc35c00f84cb0fb689431080ec46ddc41"} Jan 30 00:19:06 crc kubenswrapper[5103]: I0130 00:19:06.477949 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq6tr" event={"ID":"a044cd80-0a4b-43d0-bfa8-107bddaa28fc","Type":"ContainerStarted","Data":"461b7270878817012b7cd8e6aae200369e0a4f00b80dbc35dbb6996276b704aa"} Jan 30 00:19:06 crc kubenswrapper[5103]: 
I0130 00:19:06.481158 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gz47" event={"ID":"5fd1ccc1-87a2-43d0-9183-1e907f804a16","Type":"ContainerStarted","Data":"222d54563f7ea33a689b0ac58815327cffe1f881b8d06be59183e3c7bde4b359"} Jan 30 00:19:06 crc kubenswrapper[5103]: I0130 00:19:06.518860 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vq6tr" podStartSLOduration=3.739159207 podStartE2EDuration="4.518844969s" podCreationTimestamp="2026-01-30 00:19:02 +0000 UTC" firstStartedPulling="2026-01-30 00:19:03.42886662 +0000 UTC m=+533.300364702" lastFinishedPulling="2026-01-30 00:19:04.208552372 +0000 UTC m=+534.080050464" observedRunningTime="2026-01-30 00:19:06.518429049 +0000 UTC m=+536.389927101" watchObservedRunningTime="2026-01-30 00:19:06.518844969 +0000 UTC m=+536.390343021" Jan 30 00:19:07 crc kubenswrapper[5103]: I0130 00:19:07.490071 5103 generic.go:358] "Generic (PLEG): container finished" podID="fc2ed764-8df0-4a15-9d66-c2abad3ee367" containerID="d316bbfda43a2c82e13b05051fbf85abc35c00f84cb0fb689431080ec46ddc41" exitCode=0 Jan 30 00:19:07 crc kubenswrapper[5103]: I0130 00:19:07.490255 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmvfq" event={"ID":"fc2ed764-8df0-4a15-9d66-c2abad3ee367","Type":"ContainerDied","Data":"d316bbfda43a2c82e13b05051fbf85abc35c00f84cb0fb689431080ec46ddc41"} Jan 30 00:19:07 crc kubenswrapper[5103]: I0130 00:19:07.492714 5103 generic.go:358] "Generic (PLEG): container finished" podID="5fd1ccc1-87a2-43d0-9183-1e907f804a16" containerID="222d54563f7ea33a689b0ac58815327cffe1f881b8d06be59183e3c7bde4b359" exitCode=0 Jan 30 00:19:07 crc kubenswrapper[5103]: I0130 00:19:07.492808 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gz47" event={"ID":"5fd1ccc1-87a2-43d0-9183-1e907f804a16","Type":"ContainerDied","Data":"222d54563f7ea33a689b0ac58815327cffe1f881b8d06be59183e3c7bde4b359"} Jan 30 00:19:08 crc kubenswrapper[5103]: I0130 00:19:08.498723 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmvfq" event={"ID":"fc2ed764-8df0-4a15-9d66-c2abad3ee367","Type":"ContainerStarted","Data":"453706a1a352476d7e0ea77dcb3ff53e3627f9ddd0b9c0b16a46ad3486167e12"} Jan 30 00:19:08 crc kubenswrapper[5103]: I0130 00:19:08.501652 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gz47" event={"ID":"5fd1ccc1-87a2-43d0-9183-1e907f804a16","Type":"ContainerStarted","Data":"97cea0009b727ca205b8b6934c8bf6828c8dbe09508d3e576e6f37feaf93ced4"} Jan 30 00:19:08 crc kubenswrapper[5103]: I0130 00:19:08.520606 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wmvfq" podStartSLOduration=4.824040318 podStartE2EDuration="5.520585185s" podCreationTimestamp="2026-01-30 00:19:03 +0000 UTC" firstStartedPulling="2026-01-30 00:19:05.452124772 +0000 UTC m=+535.323622824" lastFinishedPulling="2026-01-30 00:19:06.148669639 +0000 UTC m=+536.020167691" observedRunningTime="2026-01-30 00:19:08.515056207 +0000 UTC m=+538.386554279" watchObservedRunningTime="2026-01-30 00:19:08.520585185 +0000 UTC m=+538.392083237" Jan 30 00:19:08 crc kubenswrapper[5103]: I0130 00:19:08.543209 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4gz47" 
podStartSLOduration=3.899925153 podStartE2EDuration="4.54319403s" podCreationTimestamp="2026-01-30 00:19:04 +0000 UTC" firstStartedPulling="2026-01-30 00:19:05.462401959 +0000 UTC m=+535.333900011" lastFinishedPulling="2026-01-30 00:19:06.105670836 +0000 UTC m=+535.977168888" observedRunningTime="2026-01-30 00:19:08.540124373 +0000 UTC m=+538.411622445" watchObservedRunningTime="2026-01-30 00:19:08.54319403 +0000 UTC m=+538.414692082" Jan 30 00:19:11 crc kubenswrapper[5103]: I0130 00:19:11.839875 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:11 crc kubenswrapper[5103]: I0130 00:19:11.840370 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:11 crc kubenswrapper[5103]: I0130 00:19:11.897245 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:12 crc kubenswrapper[5103]: I0130 00:19:12.438786 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:12 crc kubenswrapper[5103]: I0130 00:19:12.438830 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:12 crc kubenswrapper[5103]: I0130 00:19:12.487666 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:12 crc kubenswrapper[5103]: I0130 00:19:12.564443 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vq6tr" Jan 30 00:19:12 crc kubenswrapper[5103]: I0130 00:19:12.566597 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.221112 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.221771 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.274530 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.599069 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wmvfq" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.834672 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.835122 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:14 crc kubenswrapper[5103]: I0130 00:19:14.888776 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:15 crc kubenswrapper[5103]: I0130 00:19:15.597722 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4gz47" Jan 30 00:19:27 crc kubenswrapper[5103]: I0130 
00:19:27.499141 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-2wtrh" Jan 30 00:19:27 crc kubenswrapper[5103]: I0130 00:19:27.580905 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jfm6p"] Jan 30 00:19:52 crc kubenswrapper[5103]: I0130 00:19:52.649679 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" podUID="d69ff998-a349-40e4-8653-bfded7d60952" containerName="registry" containerID="cri-o://ffcde02830ce4ad7b97b4b84ec1411fc924348315e06fb6b2821c02bafdfedc3" gracePeriod=30 Jan 30 00:19:52 crc kubenswrapper[5103]: I0130 00:19:52.793088 5103 generic.go:358] "Generic (PLEG): container finished" podID="d69ff998-a349-40e4-8653-bfded7d60952" containerID="ffcde02830ce4ad7b97b4b84ec1411fc924348315e06fb6b2821c02bafdfedc3" exitCode=0 Jan 30 00:19:52 crc kubenswrapper[5103]: I0130 00:19:52.793218 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" event={"ID":"d69ff998-a349-40e4-8653-bfded7d60952","Type":"ContainerDied","Data":"ffcde02830ce4ad7b97b4b84ec1411fc924348315e06fb6b2821c02bafdfedc3"} Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.097785 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224326 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-bound-sa-token\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224442 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d69ff998-a349-40e4-8653-bfded7d60952-installation-pull-secrets\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224487 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-registry-tls\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224747 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-trusted-ca\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224826 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d69ff998-a349-40e4-8653-bfded7d60952-ca-trust-extracted\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224891 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-registry-certificates\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.224997 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plqc7\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-kube-api-access-plqc7\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.225291 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"d69ff998-a349-40e4-8653-bfded7d60952\" (UID: \"d69ff998-a349-40e4-8653-bfded7d60952\") " Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.225668 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.225867 5103 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.226485 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.234279 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.235871 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-kube-api-access-plqc7" (OuterVolumeSpecName: "kube-api-access-plqc7") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "kube-api-access-plqc7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.235988 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69ff998-a349-40e4-8653-bfded7d60952-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.236495 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.239307 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.260148 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d69ff998-a349-40e4-8653-bfded7d60952-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d69ff998-a349-40e4-8653-bfded7d60952" (UID: "d69ff998-a349-40e4-8653-bfded7d60952"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.327270 5103 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.327316 5103 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d69ff998-a349-40e4-8653-bfded7d60952-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.327336 5103 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.327352 5103 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d69ff998-a349-40e4-8653-bfded7d60952-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.327370 5103 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d69ff998-a349-40e4-8653-bfded7d60952-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.327387 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-plqc7\" (UniqueName: \"kubernetes.io/projected/d69ff998-a349-40e4-8653-bfded7d60952-kube-api-access-plqc7\") on node \"crc\" DevicePath \"\"" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.803749 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.803779 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jfm6p" event={"ID":"d69ff998-a349-40e4-8653-bfded7d60952","Type":"ContainerDied","Data":"4e145232ebdfb182b6a3d1e5a1b96cd199f982d856f76867803b018fe8ea7f1d"} Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.803872 5103 scope.go:117] "RemoveContainer" containerID="ffcde02830ce4ad7b97b4b84ec1411fc924348315e06fb6b2821c02bafdfedc3" Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.858707 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jfm6p"] Jan 30 00:19:53 crc kubenswrapper[5103]: I0130 00:19:53.869347 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jfm6p"] Jan 30 00:19:54 crc kubenswrapper[5103]: I0130 00:19:54.880847 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d69ff998-a349-40e4-8653-bfded7d60952" path="/var/lib/kubelet/pods/d69ff998-a349-40e4-8653-bfded7d60952/volumes" Jan 30 00:19:58 crc kubenswrapper[5103]: I0130 00:19:58.494138 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:19:58 crc kubenswrapper[5103]: I0130 00:19:58.494573 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.147448 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495540-rtq7h"] Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.149849 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d69ff998-a349-40e4-8653-bfded7d60952" containerName="registry" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.150065 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69ff998-a349-40e4-8653-bfded7d60952" containerName="registry" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.150581 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="d69ff998-a349-40e4-8653-bfded7d60952" containerName="registry" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.173447 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-rtq7h"] Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.173630 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.176066 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.176877 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.178221 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.338748 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kn8m\" (UniqueName: \"kubernetes.io/projected/d6b2c0b7-a88b-4f50-945a-938210a1c4cc-kube-api-access-4kn8m\") pod \"auto-csr-approver-29495540-rtq7h\" (UID: \"d6b2c0b7-a88b-4f50-945a-938210a1c4cc\") " pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.440539 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4kn8m\" (UniqueName: \"kubernetes.io/projected/d6b2c0b7-a88b-4f50-945a-938210a1c4cc-kube-api-access-4kn8m\") pod \"auto-csr-approver-29495540-rtq7h\" (UID: \"d6b2c0b7-a88b-4f50-945a-938210a1c4cc\") " pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.479854 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kn8m\" (UniqueName: \"kubernetes.io/projected/d6b2c0b7-a88b-4f50-945a-938210a1c4cc-kube-api-access-4kn8m\") pod \"auto-csr-approver-29495540-rtq7h\" (UID: \"d6b2c0b7-a88b-4f50-945a-938210a1c4cc\") " pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.504088 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.744829 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-rtq7h"] Jan 30 00:20:00 crc kubenswrapper[5103]: I0130 00:20:00.861597 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" event={"ID":"d6b2c0b7-a88b-4f50-945a-938210a1c4cc","Type":"ContainerStarted","Data":"7f30ca1d97314819f4a96c58426c000df59b6dbd37b58982259d350429341e7d"} Jan 30 00:20:03 crc kubenswrapper[5103]: I0130 00:20:03.889128 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" event={"ID":"d6b2c0b7-a88b-4f50-945a-938210a1c4cc","Type":"ContainerStarted","Data":"2aa077047165a4cd73187258a4227191c8d3c969d4671d6a4bcf6e0c0698cf60"} Jan 30 00:20:03 crc kubenswrapper[5103]: I0130 00:20:03.905634 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" podStartSLOduration=1.327138092 podStartE2EDuration="3.905618603s" podCreationTimestamp="2026-01-30 00:20:00 +0000 UTC" firstStartedPulling="2026-01-30 00:20:00.752887434 +0000 UTC m=+590.624385496" lastFinishedPulling="2026-01-30 00:20:03.331367955 +0000 UTC m=+593.202866007" observedRunningTime="2026-01-30 00:20:03.903858429 +0000 UTC m=+593.775356481" watchObservedRunningTime="2026-01-30 00:20:03.905618603 +0000 UTC m=+593.777116665" Jan 30 00:20:04 crc kubenswrapper[5103]: I0130 00:20:04.043623 5103 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-5fhxj" Jan 30 00:20:04 crc kubenswrapper[5103]: I0130 00:20:04.068524 5103 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-5fhxj" Jan 30 00:20:04 crc kubenswrapper[5103]: I0130 00:20:04.899220 5103 generic.go:358] "Generic (PLEG): container finished" podID="d6b2c0b7-a88b-4f50-945a-938210a1c4cc" containerID="2aa077047165a4cd73187258a4227191c8d3c969d4671d6a4bcf6e0c0698cf60" exitCode=0 Jan 30 00:20:04 crc kubenswrapper[5103]: I0130 00:20:04.899367 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" event={"ID":"d6b2c0b7-a88b-4f50-945a-938210a1c4cc","Type":"ContainerDied","Data":"2aa077047165a4cd73187258a4227191c8d3c969d4671d6a4bcf6e0c0698cf60"} Jan 30 00:20:05 crc kubenswrapper[5103]: I0130 00:20:05.069961 5103 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-01 00:15:04 +0000 UTC" deadline="2026-02-25 12:32:25.004181789 +0000 UTC" Jan 30 00:20:05 crc kubenswrapper[5103]: I0130 00:20:05.070020 5103 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="636h12m19.934166546s" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.070669 5103 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-01 00:15:04 +0000 UTC" deadline="2026-02-25 12:28:03.634943452 +0000 UTC" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.070733 5103 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="636h7m57.564217027s" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.258766 5103 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.329869 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kn8m\" (UniqueName: \"kubernetes.io/projected/d6b2c0b7-a88b-4f50-945a-938210a1c4cc-kube-api-access-4kn8m\") pod \"d6b2c0b7-a88b-4f50-945a-938210a1c4cc\" (UID: \"d6b2c0b7-a88b-4f50-945a-938210a1c4cc\") " Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.338939 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b2c0b7-a88b-4f50-945a-938210a1c4cc-kube-api-access-4kn8m" (OuterVolumeSpecName: "kube-api-access-4kn8m") pod "d6b2c0b7-a88b-4f50-945a-938210a1c4cc" (UID: "d6b2c0b7-a88b-4f50-945a-938210a1c4cc"). InnerVolumeSpecName "kube-api-access-4kn8m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.431304 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4kn8m\" (UniqueName: \"kubernetes.io/projected/d6b2c0b7-a88b-4f50-945a-938210a1c4cc-kube-api-access-4kn8m\") on node \"crc\" DevicePath \"\"" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.916294 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" event={"ID":"d6b2c0b7-a88b-4f50-945a-938210a1c4cc","Type":"ContainerDied","Data":"7f30ca1d97314819f4a96c58426c000df59b6dbd37b58982259d350429341e7d"} Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.916363 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f30ca1d97314819f4a96c58426c000df59b6dbd37b58982259d350429341e7d" Jan 30 00:20:06 crc kubenswrapper[5103]: I0130 00:20:06.916369 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495540-rtq7h" Jan 30 00:20:11 crc kubenswrapper[5103]: I0130 00:20:11.180408 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:20:11 crc kubenswrapper[5103]: I0130 00:20:11.180675 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:20:28 crc kubenswrapper[5103]: I0130 00:20:28.493459 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:20:28 crc kubenswrapper[5103]: I0130 00:20:28.494238 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:20:58 crc kubenswrapper[5103]: I0130 00:20:58.494141 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:20:58 crc kubenswrapper[5103]: I0130 00:20:58.494888 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:20:58 crc kubenswrapper[5103]: I0130 00:20:58.494955 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:20:58 crc kubenswrapper[5103]: I0130 00:20:58.495999 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"399cda3a0f0aa765b5f32eacaf816dc8466c112e0b2d2cfeb27afa2df61ade76"} pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:20:58 crc kubenswrapper[5103]: I0130 00:20:58.496289 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" containerID="cri-o://399cda3a0f0aa765b5f32eacaf816dc8466c112e0b2d2cfeb27afa2df61ade76" gracePeriod=600 Jan 30 00:20:58 crc kubenswrapper[5103]: I0130 00:20:58.632408 5103 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:20:59 crc kubenswrapper[5103]: I0130 00:20:59.279620 5103 generic.go:358] "Generic (PLEG): container finished" podID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerID="399cda3a0f0aa765b5f32eacaf816dc8466c112e0b2d2cfeb27afa2df61ade76" exitCode=0 Jan 30 00:20:59 crc 
kubenswrapper[5103]: I0130 00:20:59.279677 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerDied","Data":"399cda3a0f0aa765b5f32eacaf816dc8466c112e0b2d2cfeb27afa2df61ade76"} Jan 30 00:20:59 crc kubenswrapper[5103]: I0130 00:20:59.280164 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"3697ed82987151cb30b4fcbd0a44d2a69c948067ae7968004a45b0cf18254730"} Jan 30 00:20:59 crc kubenswrapper[5103]: I0130 00:20:59.280207 5103 scope.go:117] "RemoveContainer" containerID="346d68dc943f95b7c3635e3ca8c695bae2c81b93ca2769fe09d08ce315c33590" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.153753 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495542-lzgvl"] Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.155329 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d6b2c0b7-a88b-4f50-945a-938210a1c4cc" containerName="oc" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.155350 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b2c0b7-a88b-4f50-945a-938210a1c4cc" containerName="oc" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.155482 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="d6b2c0b7-a88b-4f50-945a-938210a1c4cc" containerName="oc" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.161854 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-lzgvl"] Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.162003 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.165883 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.166377 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.166751 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.295026 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbwbd\" (UniqueName: \"kubernetes.io/projected/b6eabbd6-7a3e-476d-9412-948faeb44ce2-kube-api-access-zbwbd\") pod \"auto-csr-approver-29495542-lzgvl\" (UID: \"b6eabbd6-7a3e-476d-9412-948faeb44ce2\") " pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.396223 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zbwbd\" (UniqueName: \"kubernetes.io/projected/b6eabbd6-7a3e-476d-9412-948faeb44ce2-kube-api-access-zbwbd\") pod \"auto-csr-approver-29495542-lzgvl\" (UID: \"b6eabbd6-7a3e-476d-9412-948faeb44ce2\") " pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.430741 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbwbd\" (UniqueName: \"kubernetes.io/projected/b6eabbd6-7a3e-476d-9412-948faeb44ce2-kube-api-access-zbwbd\") pod \"auto-csr-approver-29495542-lzgvl\" (UID: \"b6eabbd6-7a3e-476d-9412-948faeb44ce2\") " pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.487571 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:00 crc kubenswrapper[5103]: I0130 00:22:00.718867 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-lzgvl"] Jan 30 00:22:01 crc kubenswrapper[5103]: I0130 00:22:01.719404 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" event={"ID":"b6eabbd6-7a3e-476d-9412-948faeb44ce2","Type":"ContainerStarted","Data":"dcdcc879cfd944f8ca59864e7a46850c5adc8d572255862c4e067ddc21b1abfe"} Jan 30 00:22:02 crc kubenswrapper[5103]: I0130 00:22:02.725875 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" event={"ID":"b6eabbd6-7a3e-476d-9412-948faeb44ce2","Type":"ContainerStarted","Data":"85eb57e0bc83856f4d4d5eb131d80fc4f6400f67738b8a99f839b0af0918444e"} Jan 30 00:22:02 crc kubenswrapper[5103]: I0130 00:22:02.744819 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" podStartSLOduration=1.597656546 podStartE2EDuration="2.744793578s" podCreationTimestamp="2026-01-30 00:22:00 +0000 UTC" firstStartedPulling="2026-01-30 00:22:00.743343804 +0000 UTC m=+710.614841856" lastFinishedPulling="2026-01-30 00:22:01.890480836 +0000 UTC m=+711.761978888" observedRunningTime="2026-01-30 00:22:02.740581784 +0000 UTC m=+712.612079836" watchObservedRunningTime="2026-01-30 00:22:02.744793578 +0000 UTC m=+712.616291630" Jan 30 00:22:03 crc kubenswrapper[5103]: I0130 00:22:03.731643 5103 generic.go:358] "Generic (PLEG): container finished" podID="b6eabbd6-7a3e-476d-9412-948faeb44ce2" containerID="85eb57e0bc83856f4d4d5eb131d80fc4f6400f67738b8a99f839b0af0918444e" exitCode=0 Jan 30 00:22:03 crc kubenswrapper[5103]: I0130 00:22:03.731745 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" event={"ID":"b6eabbd6-7a3e-476d-9412-948faeb44ce2","Type":"ContainerDied","Data":"85eb57e0bc83856f4d4d5eb131d80fc4f6400f67738b8a99f839b0af0918444e"} Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.022038 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.076386 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbwbd\" (UniqueName: \"kubernetes.io/projected/b6eabbd6-7a3e-476d-9412-948faeb44ce2-kube-api-access-zbwbd\") pod \"b6eabbd6-7a3e-476d-9412-948faeb44ce2\" (UID: \"b6eabbd6-7a3e-476d-9412-948faeb44ce2\") " Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.104509 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6eabbd6-7a3e-476d-9412-948faeb44ce2-kube-api-access-zbwbd" (OuterVolumeSpecName: "kube-api-access-zbwbd") pod "b6eabbd6-7a3e-476d-9412-948faeb44ce2" (UID: "b6eabbd6-7a3e-476d-9412-948faeb44ce2"). InnerVolumeSpecName "kube-api-access-zbwbd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.178155 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zbwbd\" (UniqueName: \"kubernetes.io/projected/b6eabbd6-7a3e-476d-9412-948faeb44ce2-kube-api-access-zbwbd\") on node \"crc\" DevicePath \"\"" Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.745698 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" event={"ID":"b6eabbd6-7a3e-476d-9412-948faeb44ce2","Type":"ContainerDied","Data":"dcdcc879cfd944f8ca59864e7a46850c5adc8d572255862c4e067ddc21b1abfe"} Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.745752 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcdcc879cfd944f8ca59864e7a46850c5adc8d572255862c4e067ddc21b1abfe" Jan 30 00:22:05 crc kubenswrapper[5103]: I0130 00:22:05.745830 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495542-lzgvl" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.072069 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w6lt8"] Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.074257 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b6eabbd6-7a3e-476d-9412-948faeb44ce2" containerName="oc" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.074430 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6eabbd6-7a3e-476d-9412-948faeb44ce2" containerName="oc" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.074591 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b6eabbd6-7a3e-476d-9412-948faeb44ce2" containerName="oc" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.083572 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.088561 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6lt8"] Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.211574 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-utilities\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.211650 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-catalog-content\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.211680 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ppr4\" (UniqueName: \"kubernetes.io/projected/19842367-c9ea-467c-bd39-d3cd7c857c2b-kube-api-access-8ppr4\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.313171 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-catalog-content\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.313242 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8ppr4\" (UniqueName: \"kubernetes.io/projected/19842367-c9ea-467c-bd39-d3cd7c857c2b-kube-api-access-8ppr4\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.313358 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-utilities\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.314167 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-utilities\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.314498 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-catalog-content\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.344872 5103 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8ppr4\" (UniqueName: \"kubernetes.io/projected/19842367-c9ea-467c-bd39-d3cd7c857c2b-kube-api-access-8ppr4\") pod \"certified-operators-w6lt8\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.407325 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:22:57 crc kubenswrapper[5103]: I0130 00:22:57.640517 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6lt8"] Jan 30 00:22:58 crc kubenswrapper[5103]: I0130 00:22:58.072951 5103 generic.go:358] "Generic (PLEG): container finished" podID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerID="e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a" exitCode=0 Jan 30 00:22:58 crc kubenswrapper[5103]: I0130 00:22:58.073093 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerDied","Data":"e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a"} Jan 30 00:22:58 crc kubenswrapper[5103]: I0130 00:22:58.073153 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerStarted","Data":"6f091d47ee89d8aab7c93e3b02a00d901ef85d7be59ad907b801db6f5ea7772a"} Jan 30 00:22:58 crc kubenswrapper[5103]: I0130 00:22:58.493693 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:22:58 crc kubenswrapper[5103]: I0130 00:22:58.494319 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:22:59 crc kubenswrapper[5103]: I0130 00:22:59.084954 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerStarted","Data":"8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88"} Jan 30 00:23:00 crc kubenswrapper[5103]: I0130 00:23:00.096565 5103 generic.go:358] "Generic (PLEG): container finished" podID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerID="8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88" exitCode=0 Jan 30 00:23:00 crc kubenswrapper[5103]: I0130 00:23:00.096773 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerDied","Data":"8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88"} Jan 30 00:23:01 crc kubenswrapper[5103]: I0130 00:23:01.105441 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerStarted","Data":"f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836"} Jan 30 
00:23:01 crc kubenswrapper[5103]: I0130 00:23:01.139507 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w6lt8" podStartSLOduration=3.500160568 podStartE2EDuration="4.139480924s" podCreationTimestamp="2026-01-30 00:22:57 +0000 UTC" firstStartedPulling="2026-01-30 00:22:58.07462435 +0000 UTC m=+767.946122442" lastFinishedPulling="2026-01-30 00:22:58.713944736 +0000 UTC m=+768.585442798" observedRunningTime="2026-01-30 00:23:01.133242082 +0000 UTC m=+771.004740164" watchObservedRunningTime="2026-01-30 00:23:01.139480924 +0000 UTC m=+771.010979016" Jan 30 00:23:07 crc kubenswrapper[5103]: I0130 00:23:07.407793 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:23:07 crc kubenswrapper[5103]: I0130 00:23:07.408253 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:23:07 crc kubenswrapper[5103]: I0130 00:23:07.478183 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:23:08 crc kubenswrapper[5103]: I0130 00:23:08.223734 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:23:08 crc kubenswrapper[5103]: I0130 00:23:08.286154 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6lt8"] Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.175426 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w6lt8" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="registry-server" containerID="cri-o://f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836" gracePeriod=2 Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.635927 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.735332 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ppr4\" (UniqueName: \"kubernetes.io/projected/19842367-c9ea-467c-bd39-d3cd7c857c2b-kube-api-access-8ppr4\") pod \"19842367-c9ea-467c-bd39-d3cd7c857c2b\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.735423 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-utilities\") pod \"19842367-c9ea-467c-bd39-d3cd7c857c2b\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.735462 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-catalog-content\") pod \"19842367-c9ea-467c-bd39-d3cd7c857c2b\" (UID: \"19842367-c9ea-467c-bd39-d3cd7c857c2b\") " Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.737505 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-utilities" (OuterVolumeSpecName: "utilities") pod "19842367-c9ea-467c-bd39-d3cd7c857c2b" (UID: "19842367-c9ea-467c-bd39-d3cd7c857c2b"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.744924 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19842367-c9ea-467c-bd39-d3cd7c857c2b-kube-api-access-8ppr4" (OuterVolumeSpecName: "kube-api-access-8ppr4") pod "19842367-c9ea-467c-bd39-d3cd7c857c2b" (UID: "19842367-c9ea-467c-bd39-d3cd7c857c2b"). InnerVolumeSpecName "kube-api-access-8ppr4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.795119 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19842367-c9ea-467c-bd39-d3cd7c857c2b" (UID: "19842367-c9ea-467c-bd39-d3cd7c857c2b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.837027 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8ppr4\" (UniqueName: \"kubernetes.io/projected/19842367-c9ea-467c-bd39-d3cd7c857c2b-kube-api-access-8ppr4\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.837317 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:10 crc kubenswrapper[5103]: I0130 00:23:10.837444 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19842367-c9ea-467c-bd39-d3cd7c857c2b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.187192 5103 generic.go:358] "Generic (PLEG): container finished" podID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerID="f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836" exitCode=0 Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.187317 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6lt8" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.187359 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerDied","Data":"f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836"} Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.187433 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6lt8" event={"ID":"19842367-c9ea-467c-bd39-d3cd7c857c2b","Type":"ContainerDied","Data":"6f091d47ee89d8aab7c93e3b02a00d901ef85d7be59ad907b801db6f5ea7772a"} Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.187493 5103 scope.go:117] "RemoveContainer" containerID="f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.224727 5103 scope.go:117] "RemoveContainer" containerID="8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.229931 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6lt8"] Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.239773 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w6lt8"] Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.255505 5103 scope.go:117] "RemoveContainer" containerID="e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.288619 5103 scope.go:117] "RemoveContainer" containerID="f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836" Jan 30 00:23:11 crc kubenswrapper[5103]: E0130 00:23:11.289247 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836\": container with ID starting with f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836 not found: ID does not exist" containerID="f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.289296 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836"} err="failed to get container status \"f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836\": rpc error: code = NotFound desc = could not find container \"f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836\": container with ID starting with f5debbc9f5ca2c173fa31acb998080681f2eca8f1f26d7db2722b3ac48859836 not found: ID does not exist" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.289324 5103 scope.go:117] "RemoveContainer" containerID="8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88" Jan 30 00:23:11 crc kubenswrapper[5103]: E0130 00:23:11.289612 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88\": container with ID starting with 8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88 not found: ID does not exist" containerID="8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.289754 5103 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88"} err="failed to get container status \"8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88\": rpc error: code = NotFound desc = could not find container \"8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88\": container with ID starting with 8bb867fca61ffaf9760bdb6bf376011cc177a0f1c97b18780281370634222f88 not found: ID does not exist" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.289854 5103 scope.go:117] "RemoveContainer" containerID="e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a" Jan 30 00:23:11 crc kubenswrapper[5103]: E0130 00:23:11.290442 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a\": container with ID starting with e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a not found: ID does not exist" containerID="e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a" Jan 30 00:23:11 crc kubenswrapper[5103]: I0130 00:23:11.290493 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a"} err="failed to get container status \"e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a\": rpc error: code = NotFound desc = could not find container \"e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a\": container with ID starting with e223ebd3feb39d479f1fef5cd4c62a4b2ee76f432bf3dc2e8cf60a764493825a not found: ID does not exist" Jan 30 00:23:12 crc kubenswrapper[5103]: I0130 00:23:12.888787 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" path="/var/lib/kubelet/pods/19842367-c9ea-467c-bd39-d3cd7c857c2b/volumes" Jan 30 00:23:28 crc kubenswrapper[5103]: I0130 00:23:28.494038 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:23:28 crc kubenswrapper[5103]: I0130 00:23:28.495450 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.247122 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6"] Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.248225 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="kube-rbac-proxy" containerID="cri-o://031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.248761 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" 
podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="ovnkube-cluster-manager" containerID="cri-o://6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.380038 5103 generic.go:358] "Generic (PLEG): container finished" podID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerID="6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed" exitCode=0 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.380093 5103 generic.go:358] "Generic (PLEG): container finished" podID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerID="031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32" exitCode=0 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.380082 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" event={"ID":"7d918c96-a16b-4836-ac5a-83c3388f5468","Type":"ContainerDied","Data":"6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed"} Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.380131 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" event={"ID":"7d918c96-a16b-4836-ac5a-83c3388f5468","Type":"ContainerDied","Data":"031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32"} Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.449511 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8lwjf"] Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.449955 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-controller" containerID="cri-o://531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.450018 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="northd" containerID="cri-o://2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.450060 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="sbdb" containerID="cri-o://519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.450169 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="nbdb" containerID="cri-o://f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.450207 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-acl-logging" containerID="cri-o://7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.450188 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" 
podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.450250 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-node" containerID="cri-o://f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.479929 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovnkube-controller" containerID="cri-o://2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" gracePeriod=30 Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.492033 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.520469 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm"] Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521190 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="extract-content" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521214 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="extract-content" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521233 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="kube-rbac-proxy" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521241 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="kube-rbac-proxy" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521250 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="ovnkube-cluster-manager" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521258 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="ovnkube-cluster-manager" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521274 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="registry-server" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521281 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="registry-server" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521310 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="extract-utilities" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521322 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="extract-utilities" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521455 5103 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="kube-rbac-proxy" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521471 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="19842367-c9ea-467c-bd39-d3cd7c857c2b" containerName="registry-server" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.521487 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" containerName="ovnkube-cluster-manager" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.525663 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.575562 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d918c96-a16b-4836-ac5a-83c3388f5468-ovn-control-plane-metrics-cert\") pod \"7d918c96-a16b-4836-ac5a-83c3388f5468\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.575735 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-env-overrides\") pod \"7d918c96-a16b-4836-ac5a-83c3388f5468\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.575791 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-ovnkube-config\") pod \"7d918c96-a16b-4836-ac5a-83c3388f5468\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.575814 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prndc\" (UniqueName: \"kubernetes.io/projected/7d918c96-a16b-4836-ac5a-83c3388f5468-kube-api-access-prndc\") pod \"7d918c96-a16b-4836-ac5a-83c3388f5468\" (UID: \"7d918c96-a16b-4836-ac5a-83c3388f5468\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.575983 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd448e3b-d40d-4a51-b124-8d2558cece6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.576079 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd448e3b-d40d-4a51-b124-8d2558cece6f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.576136 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnv9n\" (UniqueName: \"kubernetes.io/projected/fd448e3b-d40d-4a51-b124-8d2558cece6f-kube-api-access-fnv9n\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc 
kubenswrapper[5103]: I0130 00:23:34.576178 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd448e3b-d40d-4a51-b124-8d2558cece6f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.577264 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7d918c96-a16b-4836-ac5a-83c3388f5468" (UID: "7d918c96-a16b-4836-ac5a-83c3388f5468"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.577296 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7d918c96-a16b-4836-ac5a-83c3388f5468" (UID: "7d918c96-a16b-4836-ac5a-83c3388f5468"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.586652 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d918c96-a16b-4836-ac5a-83c3388f5468-kube-api-access-prndc" (OuterVolumeSpecName: "kube-api-access-prndc") pod "7d918c96-a16b-4836-ac5a-83c3388f5468" (UID: "7d918c96-a16b-4836-ac5a-83c3388f5468"). InnerVolumeSpecName "kube-api-access-prndc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.586699 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d918c96-a16b-4836-ac5a-83c3388f5468-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7d918c96-a16b-4836-ac5a-83c3388f5468" (UID: "7d918c96-a16b-4836-ac5a-83c3388f5468"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.677731 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fnv9n\" (UniqueName: \"kubernetes.io/projected/fd448e3b-d40d-4a51-b124-8d2558cece6f-kube-api-access-fnv9n\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.677783 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd448e3b-d40d-4a51-b124-8d2558cece6f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.677832 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd448e3b-d40d-4a51-b124-8d2558cece6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.677905 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd448e3b-d40d-4a51-b124-8d2558cece6f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.677974 5103 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d918c96-a16b-4836-ac5a-83c3388f5468-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.677995 5103 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.678011 5103 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d918c96-a16b-4836-ac5a-83c3388f5468-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.678025 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-prndc\" (UniqueName: \"kubernetes.io/projected/7d918c96-a16b-4836-ac5a-83c3388f5468-kube-api-access-prndc\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.678551 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd448e3b-d40d-4a51-b124-8d2558cece6f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.678578 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/fd448e3b-d40d-4a51-b124-8d2558cece6f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.681333 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd448e3b-d40d-4a51-b124-8d2558cece6f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.696016 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnv9n\" (UniqueName: \"kubernetes.io/projected/fd448e3b-d40d-4a51-b124-8d2558cece6f-kube-api-access-fnv9n\") pod \"ovnkube-control-plane-97c9b6c48-sm6lm\" (UID: \"fd448e3b-d40d-4a51-b124-8d2558cece6f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.720983 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8lwjf_b3efa2c9-9a52-46ea-b9ad-f708dd386e79/ovn-acl-logging/0.log" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.722608 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8lwjf_b3efa2c9-9a52-46ea-b9ad-f708dd386e79/ovn-controller/0.log" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.723666 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779405 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-etc-openvswitch\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779479 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-systemd-units\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779509 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-var-lib-openvswitch\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779511 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779544 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-config\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779629 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-node-log\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779685 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-env-overrides\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779717 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-kubelet\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779743 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-log-socket\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779791 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-systemd\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779852 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-openvswitch\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779892 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-netns\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779950 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-netd\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.779990 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780032 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-script-lib\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780097 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-ovn\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780135 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovn-node-metrics-cert\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780205 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-bin\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780253 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-slash\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780300 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2mbn\" (UniqueName: \"kubernetes.io/projected/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-kube-api-access-j2mbn\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780321 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780353 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-ovn-kubernetes\") pod \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\" (UID: \"b3efa2c9-9a52-46ea-b9ad-f708dd386e79\") " Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780379 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-node-log" (OuterVolumeSpecName: "node-log") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780884 5103 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780909 5103 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780927 5103 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.780979 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781021 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781079 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781082 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781148 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781184 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-log-socket" (OuterVolumeSpecName: "log-socket") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781276 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781329 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781298 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781364 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781376 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-slash" (OuterVolumeSpecName: "host-slash") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781785 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.781889 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.782139 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.786603 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.798879 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-525dp"] Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800195 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-kube-api-access-j2mbn" (OuterVolumeSpecName: "kube-api-access-j2mbn") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "kube-api-access-j2mbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800385 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovnkube-controller" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800429 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovnkube-controller" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800457 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-acl-logging" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800470 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-acl-logging" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800492 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="northd" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800506 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="northd" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800526 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800538 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800555 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="nbdb" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800566 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="nbdb" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800594 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kubecfg-setup" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800607 5103 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kubecfg-setup" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800638 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="sbdb" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800651 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="sbdb" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800666 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-node" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800702 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-node" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800729 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-controller" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800741 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-controller" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800901 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-node" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800926 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="northd" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800942 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="sbdb" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800956 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovnkube-controller" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800974 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-acl-logging" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.800992 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="ovn-controller" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.801007 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.801026 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerName="nbdb" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.805692 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b3efa2c9-9a52-46ea-b9ad-f708dd386e79" (UID: "b3efa2c9-9a52-46ea-b9ad-f708dd386e79"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.843756 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.882634 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-run-netns\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.882706 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr4ps\" (UniqueName: \"kubernetes.io/projected/4f2eeeee-fabb-485c-b725-16a296f58c96-kube-api-access-jr4ps\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.882812 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-ovn\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.882888 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-node-log\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.882965 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f2eeeee-fabb-485c-b725-16a296f58c96-ovn-node-metrics-cert\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883020 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-kubelet\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883045 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883095 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-systemd\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883119 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-etc-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883245 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-slash\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883302 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-systemd-units\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883337 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-ovnkube-script-lib\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883402 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-run-ovn-kubernetes\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883482 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-ovnkube-config\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883566 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-log-socket\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883603 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-env-overrides\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883710 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-cni-bin\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883755 5103 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-var-lib-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883847 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.883886 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-cni-netd\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884133 5103 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884161 5103 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884178 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j2mbn\" (UniqueName: \"kubernetes.io/projected/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-kube-api-access-j2mbn\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884196 5103 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884211 5103 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884226 5103 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884241 5103 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884258 5103 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884272 5103 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-kubelet\") on node \"crc\" DevicePath 
\"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884286 5103 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884301 5103 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884318 5103 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884333 5103 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884351 5103 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884366 5103 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884381 5103 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.884399 5103 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b3efa2c9-9a52-46ea-b9ad-f708dd386e79-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.897732 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.985853 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-log-socket\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.985910 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-env-overrides\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.985945 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-cni-bin\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.985970 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-var-lib-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986023 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986027 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-log-socket\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986043 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-cni-netd\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986131 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-cni-netd\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986196 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: 
I0130 00:23:34.986296 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-run-netns\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986331 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jr4ps\" (UniqueName: \"kubernetes.io/projected/4f2eeeee-fabb-485c-b725-16a296f58c96-kube-api-access-jr4ps\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986361 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-ovn\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986393 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-node-log\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986420 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f2eeeee-fabb-485c-b725-16a296f58c96-ovn-node-metrics-cert\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986462 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-kubelet\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986484 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986537 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-systemd\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986574 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-etc-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986608 5103 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-slash\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986629 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-systemd-units\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986649 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-ovnkube-script-lib\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986665 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-cni-bin\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986714 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-ovn\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986480 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-var-lib-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986682 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-run-ovn-kubernetes\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986887 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-env-overrides\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.986964 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-run-ovn-kubernetes\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.987149 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-run-netns\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.987493 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-node-log\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988153 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-kubelet\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988249 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988415 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-run-systemd\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988582 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-etc-openvswitch\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988642 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-host-slash\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988691 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4f2eeeee-fabb-485c-b725-16a296f58c96-systemd-units\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.988833 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-ovnkube-config\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.989716 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-ovnkube-config\") pod \"ovnkube-node-525dp\" (UID: 
\"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.989801 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4f2eeeee-fabb-485c-b725-16a296f58c96-ovnkube-script-lib\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:34 crc kubenswrapper[5103]: I0130 00:23:34.992743 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f2eeeee-fabb-485c-b725-16a296f58c96-ovn-node-metrics-cert\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.016101 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr4ps\" (UniqueName: \"kubernetes.io/projected/4f2eeeee-fabb-485c-b725-16a296f58c96-kube-api-access-jr4ps\") pod \"ovnkube-node-525dp\" (UID: \"4f2eeeee-fabb-485c-b725-16a296f58c96\") " pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.161152 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:35 crc kubenswrapper[5103]: W0130 00:23:35.182370 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f2eeeee_fabb_485c_b725_16a296f58c96.slice/crio-7ad003217c531c6d61e56d56975c8a661947f0a8082492729720fca36728bd32 WatchSource:0}: Error finding container 7ad003217c531c6d61e56d56975c8a661947f0a8082492729720fca36728bd32: Status 404 returned error can't find the container with id 7ad003217c531c6d61e56d56975c8a661947f0a8082492729720fca36728bd32 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.391093 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swfns_a7dd7e02-4357-4643-8c23-2fb57ba70405/kube-multus/0.log" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.391172 5103 generic.go:358] "Generic (PLEG): container finished" podID="a7dd7e02-4357-4643-8c23-2fb57ba70405" containerID="1924d7799e7a22d8b03bdfa9e3bf703744981a899ee974cc86920ae8c5fcbbcb" exitCode=2 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.391344 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swfns" event={"ID":"a7dd7e02-4357-4643-8c23-2fb57ba70405","Type":"ContainerDied","Data":"1924d7799e7a22d8b03bdfa9e3bf703744981a899ee974cc86920ae8c5fcbbcb"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.392852 5103 scope.go:117] "RemoveContainer" containerID="1924d7799e7a22d8b03bdfa9e3bf703744981a899ee974cc86920ae8c5fcbbcb" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.393299 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" event={"ID":"fd448e3b-d40d-4a51-b124-8d2558cece6f","Type":"ContainerStarted","Data":"800548153d9e6aba1afb85f182785a661eee618e7b42f4fe127d860272336e95"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.399650 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8lwjf_b3efa2c9-9a52-46ea-b9ad-f708dd386e79/ovn-acl-logging/0.log" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 
00:23:35.400418 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8lwjf_b3efa2c9-9a52-46ea-b9ad-f708dd386e79/ovn-controller/0.log" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401723 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" exitCode=0 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401752 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" exitCode=0 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401760 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" exitCode=0 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401768 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" exitCode=0 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401775 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" exitCode=0 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401782 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" exitCode=0 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401789 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" exitCode=143 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401799 5103 generic.go:358] "Generic (PLEG): container finished" podID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" exitCode=143 Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401893 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401925 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401937 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401947 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 
00:23:35.401955 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401964 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401976 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401984 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401989 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.401996 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402004 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402010 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402015 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402020 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402025 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402029 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402034 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402039 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402065 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402075 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402085 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402092 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402098 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402102 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402107 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402112 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402117 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402121 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402125 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402132 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" event={"ID":"b3efa2c9-9a52-46ea-b9ad-f708dd386e79","Type":"ContainerDied","Data":"38221fc62e1b3d592b338664053e425c486a6c0fa3cf8ead449229dbfc4659da"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402138 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402143 5103 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402148 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402152 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402157 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402162 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402167 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402171 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402175 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402189 5103 scope.go:117] "RemoveContainer" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.402372 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8lwjf" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.406932 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" event={"ID":"7d918c96-a16b-4836-ac5a-83c3388f5468","Type":"ContainerDied","Data":"578d2296c0b9b147f002bab00ce887ae174a1dfc57c08f5d70b218ff4df99c74"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.406956 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.406965 5103 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.407035 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.408499 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"7ad003217c531c6d61e56d56975c8a661947f0a8082492729720fca36728bd32"} Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.455218 5103 scope.go:117] "RemoveContainer" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.458562 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8lwjf"] Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.472469 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8lwjf"] Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.477941 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6"] Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.481958 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k7mv6"] Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.493278 5103 scope.go:117] "RemoveContainer" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.543291 5103 scope.go:117] "RemoveContainer" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.560357 5103 scope.go:117] "RemoveContainer" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.581953 5103 scope.go:117] "RemoveContainer" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.600456 5103 scope.go:117] "RemoveContainer" containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.618490 5103 scope.go:117] "RemoveContainer" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.633533 5103 scope.go:117] "RemoveContainer" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.649582 5103 scope.go:117] "RemoveContainer" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.651330 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": container with ID starting with 2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06 not found: ID does not exist" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.652037 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} err="failed to get container status \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": rpc error: 
code = NotFound desc = could not find container \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": container with ID starting with 2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.652272 5103 scope.go:117] "RemoveContainer" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.655167 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": container with ID starting with 519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8 not found: ID does not exist" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.655224 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} err="failed to get container status \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": rpc error: code = NotFound desc = could not find container \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": container with ID starting with 519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.655254 5103 scope.go:117] "RemoveContainer" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.655861 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": container with ID starting with f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea not found: ID does not exist" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.655912 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} err="failed to get container status \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": rpc error: code = NotFound desc = could not find container \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": container with ID starting with f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.656161 5103 scope.go:117] "RemoveContainer" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.656650 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": container with ID starting with 2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087 not found: ID does not exist" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.656672 5103 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} err="failed to get container status \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": rpc error: code = NotFound desc = could not find container \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": container with ID starting with 2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.656686 5103 scope.go:117] "RemoveContainer" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.657174 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": container with ID starting with 5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0 not found: ID does not exist" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.657200 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} err="failed to get container status \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": rpc error: code = NotFound desc = could not find container \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": container with ID starting with 5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.657214 5103 scope.go:117] "RemoveContainer" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.657681 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": container with ID starting with f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e not found: ID does not exist" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.657723 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} err="failed to get container status \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": rpc error: code = NotFound desc = could not find container \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": container with ID starting with f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.657751 5103 scope.go:117] "RemoveContainer" containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.658391 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": container with ID starting with 7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b not found: ID does not exist" 
containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.658412 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} err="failed to get container status \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": rpc error: code = NotFound desc = could not find container \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": container with ID starting with 7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.658427 5103 scope.go:117] "RemoveContainer" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.658854 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": container with ID starting with 531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c not found: ID does not exist" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.658877 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} err="failed to get container status \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": rpc error: code = NotFound desc = could not find container \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": container with ID starting with 531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.658895 5103 scope.go:117] "RemoveContainer" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" Jan 30 00:23:35 crc kubenswrapper[5103]: E0130 00:23:35.659138 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": container with ID starting with 2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da not found: ID does not exist" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.659162 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} err="failed to get container status \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": rpc error: code = NotFound desc = could not find container \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": container with ID starting with 2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.659177 5103 scope.go:117] "RemoveContainer" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.659511 5103 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} err="failed to get container status \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": rpc error: code = NotFound desc = could not find container \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": container with ID starting with 2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.659533 5103 scope.go:117] "RemoveContainer" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660006 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} err="failed to get container status \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": rpc error: code = NotFound desc = could not find container \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": container with ID starting with 519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660041 5103 scope.go:117] "RemoveContainer" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660331 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} err="failed to get container status \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": rpc error: code = NotFound desc = could not find container \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": container with ID starting with f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660355 5103 scope.go:117] "RemoveContainer" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660611 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} err="failed to get container status \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": rpc error: code = NotFound desc = could not find container \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": container with ID starting with 2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660640 5103 scope.go:117] "RemoveContainer" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660870 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} err="failed to get container status \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": rpc error: code = NotFound desc = could not find container \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": container with ID starting with 5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0 not found: ID does not exist" Jan 
30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.660888 5103 scope.go:117] "RemoveContainer" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661067 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} err="failed to get container status \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": rpc error: code = NotFound desc = could not find container \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": container with ID starting with f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661090 5103 scope.go:117] "RemoveContainer" containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661272 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} err="failed to get container status \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": rpc error: code = NotFound desc = could not find container \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": container with ID starting with 7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661294 5103 scope.go:117] "RemoveContainer" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661491 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} err="failed to get container status \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": rpc error: code = NotFound desc = could not find container \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": container with ID starting with 531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661511 5103 scope.go:117] "RemoveContainer" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661723 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} err="failed to get container status \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": rpc error: code = NotFound desc = could not find container \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": container with ID starting with 2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661766 5103 scope.go:117] "RemoveContainer" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.661999 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} err="failed to get container status 
\"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": rpc error: code = NotFound desc = could not find container \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": container with ID starting with 2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662021 5103 scope.go:117] "RemoveContainer" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662201 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} err="failed to get container status \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": rpc error: code = NotFound desc = could not find container \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": container with ID starting with 519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662250 5103 scope.go:117] "RemoveContainer" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662496 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} err="failed to get container status \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": rpc error: code = NotFound desc = could not find container \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": container with ID starting with f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662520 5103 scope.go:117] "RemoveContainer" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662749 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} err="failed to get container status \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": rpc error: code = NotFound desc = could not find container \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": container with ID starting with 2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662776 5103 scope.go:117] "RemoveContainer" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.662981 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} err="failed to get container status \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": rpc error: code = NotFound desc = could not find container \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": container with ID starting with 5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663003 5103 scope.go:117] "RemoveContainer" 
containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663258 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} err="failed to get container status \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": rpc error: code = NotFound desc = could not find container \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": container with ID starting with f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663276 5103 scope.go:117] "RemoveContainer" containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663486 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} err="failed to get container status \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": rpc error: code = NotFound desc = could not find container \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": container with ID starting with 7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663499 5103 scope.go:117] "RemoveContainer" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663827 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} err="failed to get container status \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": rpc error: code = NotFound desc = could not find container \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": container with ID starting with 531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.663849 5103 scope.go:117] "RemoveContainer" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.664678 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} err="failed to get container status \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": rpc error: code = NotFound desc = could not find container \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": container with ID starting with 2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.664703 5103 scope.go:117] "RemoveContainer" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.665029 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} err="failed to get container status \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": rpc error: code = NotFound desc = could not find 
container \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": container with ID starting with 2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.665064 5103 scope.go:117] "RemoveContainer" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.665345 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} err="failed to get container status \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": rpc error: code = NotFound desc = could not find container \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": container with ID starting with 519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.665380 5103 scope.go:117] "RemoveContainer" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.665758 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} err="failed to get container status \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": rpc error: code = NotFound desc = could not find container \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": container with ID starting with f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.665776 5103 scope.go:117] "RemoveContainer" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666026 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} err="failed to get container status \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": rpc error: code = NotFound desc = could not find container \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": container with ID starting with 2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666062 5103 scope.go:117] "RemoveContainer" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666430 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} err="failed to get container status \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": rpc error: code = NotFound desc = could not find container \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": container with ID starting with 5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666450 5103 scope.go:117] "RemoveContainer" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666700 5103 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} err="failed to get container status \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": rpc error: code = NotFound desc = could not find container \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": container with ID starting with f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666719 5103 scope.go:117] "RemoveContainer" containerID="7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666883 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b"} err="failed to get container status \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": rpc error: code = NotFound desc = could not find container \"7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b\": container with ID starting with 7871800748abc7cf825e9ef97d61dfd4b3bbba7352c8406fc5449074a670172b not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.666899 5103 scope.go:117] "RemoveContainer" containerID="531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667065 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c"} err="failed to get container status \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": rpc error: code = NotFound desc = could not find container \"531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c\": container with ID starting with 531ed8c2d855e0b3cc43672d44f3ececad91bbac9e662bb5f01419f30478f86c not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667085 5103 scope.go:117] "RemoveContainer" containerID="2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667286 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da"} err="failed to get container status \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": rpc error: code = NotFound desc = could not find container \"2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da\": container with ID starting with 2560d435a9d30bd26cbeb02d2171a5db4e95dff591066c7984b0e3b8e49046da not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667302 5103 scope.go:117] "RemoveContainer" containerID="2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667480 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06"} err="failed to get container status \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": rpc error: code = NotFound desc = could not find container \"2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06\": container with ID starting with 
2eb4c575878b358906af5393843dd9af31e51adb4febdf8d8be492629a636a06 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667496 5103 scope.go:117] "RemoveContainer" containerID="519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667728 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8"} err="failed to get container status \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": rpc error: code = NotFound desc = could not find container \"519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8\": container with ID starting with 519ca70c3f44fd9615daf002b14770e98b3f4f4b6fda2ef74812c9ec1390cfd8 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667742 5103 scope.go:117] "RemoveContainer" containerID="f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667906 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea"} err="failed to get container status \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": rpc error: code = NotFound desc = could not find container \"f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea\": container with ID starting with f06d603828e7d20e68baad760dec48640d78e5e5d7bda3a6b461f111b034bdea not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.667921 5103 scope.go:117] "RemoveContainer" containerID="2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.668187 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087"} err="failed to get container status \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": rpc error: code = NotFound desc = could not find container \"2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087\": container with ID starting with 2337fc6799252dd95a7e97972f3d8bf28170f45300692097cd1ac4a1945e3087 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.668227 5103 scope.go:117] "RemoveContainer" containerID="5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.668417 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0"} err="failed to get container status \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": rpc error: code = NotFound desc = could not find container \"5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0\": container with ID starting with 5e07d147c8563bf8f606135cd057663ff4335e07047fa2ec60ef4a2c66df32a0 not found: ID does not exist" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.668442 5103 scope.go:117] "RemoveContainer" containerID="f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e" Jan 30 00:23:35 crc kubenswrapper[5103]: I0130 00:23:35.668632 5103 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e"} err="failed to get container status \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": rpc error: code = NotFound desc = could not find container \"f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e\": container with ID starting with f98a46f6cc4ea438c1b1283e2c43486f7872a7f87aa9c4105e2cf13d2f7b886e not found: ID does not exist" Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.444291 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" event={"ID":"fd448e3b-d40d-4a51-b124-8d2558cece6f","Type":"ContainerStarted","Data":"1c96f4b0c1dc88063fcdd170ca416360f8b21df2d89cb589689443128774a010"} Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.444380 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" event={"ID":"fd448e3b-d40d-4a51-b124-8d2558cece6f","Type":"ContainerStarted","Data":"3ff669de89dd11b86c8c6ade1f21eb1b843c4ec83d3c7a3bc086f0faf8f660c6"} Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.452403 5103 generic.go:358] "Generic (PLEG): container finished" podID="4f2eeeee-fabb-485c-b725-16a296f58c96" containerID="5480578daefef342b60440da2a8c82fa7379571f14bda252e4eacbdfce4267a0" exitCode=0 Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.452552 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerDied","Data":"5480578daefef342b60440da2a8c82fa7379571f14bda252e4eacbdfce4267a0"} Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.455441 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swfns_a7dd7e02-4357-4643-8c23-2fb57ba70405/kube-multus/0.log" Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.455649 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swfns" event={"ID":"a7dd7e02-4357-4643-8c23-2fb57ba70405","Type":"ContainerStarted","Data":"1c3b59e2cda1f03dc4a6b2af74a2dd9b717de4547f0c7cdd9d896b9db0816d37"} Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.513023 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-sm6lm" podStartSLOduration=2.513000987 podStartE2EDuration="2.513000987s" podCreationTimestamp="2026-01-30 00:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:23:36.470857476 +0000 UTC m=+806.342355558" watchObservedRunningTime="2026-01-30 00:23:36.513000987 +0000 UTC m=+806.384499049" Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.875953 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d918c96-a16b-4836-ac5a-83c3388f5468" path="/var/lib/kubelet/pods/7d918c96-a16b-4836-ac5a-83c3388f5468/volumes" Jan 30 00:23:36 crc kubenswrapper[5103]: I0130 00:23:36.877382 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3efa2c9-9a52-46ea-b9ad-f708dd386e79" path="/var/lib/kubelet/pods/b3efa2c9-9a52-46ea-b9ad-f708dd386e79/volumes" Jan 30 00:23:37 crc kubenswrapper[5103]: I0130 00:23:37.465597 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" 
event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"a59c50deadb71ab7529cc5235f6ee78a3c451b9366c7c77494c25cc29398ddb0"} Jan 30 00:23:37 crc kubenswrapper[5103]: I0130 00:23:37.465675 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"b47f9ba5bf6e0ba2f24dd97c6fdd8582b956b05be899f6b7ea707a991e241426"} Jan 30 00:23:37 crc kubenswrapper[5103]: I0130 00:23:37.465704 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"9798f41598c26fb1d9c36d1f7f8062236cd58860a21e8d013597dfe6fc4f0428"} Jan 30 00:23:37 crc kubenswrapper[5103]: I0130 00:23:37.465727 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"d7e03f3bde92d63378ee648779340aa81bb05e0bbaf3a0c48063217217861704"} Jan 30 00:23:37 crc kubenswrapper[5103]: I0130 00:23:37.465750 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"4f0932da209b17bd45b79dab32b319588d7f4d5201dbe532ff9b9d0992d37a00"} Jan 30 00:23:37 crc kubenswrapper[5103]: I0130 00:23:37.465775 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"12c0b36156e64212436c1448eadd7f9d77ed9daba09018cfb0395e91e3dd6d81"} Jan 30 00:23:40 crc kubenswrapper[5103]: I0130 00:23:40.488820 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"1db3989ac62b8fa21c41ef8d83db7024b90e0a927a18b100cbb4b74ce8efb6ec"} Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.508362 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" event={"ID":"4f2eeeee-fabb-485c-b725-16a296f58c96","Type":"ContainerStarted","Data":"d2cec7bfc7e0d791f3c154776f751f855d988fffd40f27993eb89f9c299868a4"} Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.508998 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.509016 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.509026 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.536843 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.539584 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:23:42 crc kubenswrapper[5103]: I0130 00:23:42.541583 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" 
podStartSLOduration=8.541566658 podStartE2EDuration="8.541566658s" podCreationTimestamp="2026-01-30 00:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:23:42.538882653 +0000 UTC m=+812.410380715" watchObservedRunningTime="2026-01-30 00:23:42.541566658 +0000 UTC m=+812.413064730" Jan 30 00:23:58 crc kubenswrapper[5103]: I0130 00:23:58.493032 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:23:58 crc kubenswrapper[5103]: I0130 00:23:58.494032 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:23:58 crc kubenswrapper[5103]: I0130 00:23:58.494148 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:23:58 crc kubenswrapper[5103]: I0130 00:23:58.495182 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3697ed82987151cb30b4fcbd0a44d2a69c948067ae7968004a45b0cf18254730"} pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:23:58 crc kubenswrapper[5103]: I0130 00:23:58.495308 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" containerID="cri-o://3697ed82987151cb30b4fcbd0a44d2a69c948067ae7968004a45b0cf18254730" gracePeriod=600 Jan 30 00:23:59 crc kubenswrapper[5103]: I0130 00:23:59.635421 5103 generic.go:358] "Generic (PLEG): container finished" podID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerID="3697ed82987151cb30b4fcbd0a44d2a69c948067ae7968004a45b0cf18254730" exitCode=0 Jan 30 00:23:59 crc kubenswrapper[5103]: I0130 00:23:59.635519 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerDied","Data":"3697ed82987151cb30b4fcbd0a44d2a69c948067ae7968004a45b0cf18254730"} Jan 30 00:23:59 crc kubenswrapper[5103]: I0130 00:23:59.637757 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"90100aee52a55e7e0a5d62a8eaebe5bce65c117790d10db4419165f35e2674a5"} Jan 30 00:23:59 crc kubenswrapper[5103]: I0130 00:23:59.637800 5103 scope.go:117] "RemoveContainer" containerID="399cda3a0f0aa765b5f32eacaf816dc8466c112e0b2d2cfeb27afa2df61ade76" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.144534 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kj6vw"] Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.151441 5103 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.156622 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kj6vw"] Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.159334 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.160858 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.161012 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.257469 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfd28\" (UniqueName: \"kubernetes.io/projected/5ad58695-120d-466b-bec0-3198637da77d-kube-api-access-xfd28\") pod \"auto-csr-approver-29495544-kj6vw\" (UID: \"5ad58695-120d-466b-bec0-3198637da77d\") " pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.358840 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xfd28\" (UniqueName: \"kubernetes.io/projected/5ad58695-120d-466b-bec0-3198637da77d-kube-api-access-xfd28\") pod \"auto-csr-approver-29495544-kj6vw\" (UID: \"5ad58695-120d-466b-bec0-3198637da77d\") " pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.396356 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfd28\" (UniqueName: \"kubernetes.io/projected/5ad58695-120d-466b-bec0-3198637da77d-kube-api-access-xfd28\") pod \"auto-csr-approver-29495544-kj6vw\" (UID: \"5ad58695-120d-466b-bec0-3198637da77d\") " pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.469327 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:00 crc kubenswrapper[5103]: I0130 00:24:00.707164 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kj6vw"] Jan 30 00:24:00 crc kubenswrapper[5103]: W0130 00:24:00.716313 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ad58695_120d_466b_bec0_3198637da77d.slice/crio-7772c6906efa6b132de6eed89a2bcf6e9224bd50777e9266ced80722fb99c257 WatchSource:0}: Error finding container 7772c6906efa6b132de6eed89a2bcf6e9224bd50777e9266ced80722fb99c257: Status 404 returned error can't find the container with id 7772c6906efa6b132de6eed89a2bcf6e9224bd50777e9266ced80722fb99c257 Jan 30 00:24:01 crc kubenswrapper[5103]: I0130 00:24:01.654850 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" event={"ID":"5ad58695-120d-466b-bec0-3198637da77d","Type":"ContainerStarted","Data":"7772c6906efa6b132de6eed89a2bcf6e9224bd50777e9266ced80722fb99c257"} Jan 30 00:24:02 crc kubenswrapper[5103]: I0130 00:24:02.664396 5103 generic.go:358] "Generic (PLEG): container finished" podID="5ad58695-120d-466b-bec0-3198637da77d" containerID="cc6d50dd8cf2d79869118c21971c35ee57934965ea393fbb5dc64b460746ac0e" exitCode=0 Jan 30 00:24:02 crc kubenswrapper[5103]: I0130 00:24:02.664534 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" event={"ID":"5ad58695-120d-466b-bec0-3198637da77d","Type":"ContainerDied","Data":"cc6d50dd8cf2d79869118c21971c35ee57934965ea393fbb5dc64b460746ac0e"} Jan 30 00:24:03 crc kubenswrapper[5103]: I0130 00:24:03.985977 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:04 crc kubenswrapper[5103]: I0130 00:24:04.112187 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfd28\" (UniqueName: \"kubernetes.io/projected/5ad58695-120d-466b-bec0-3198637da77d-kube-api-access-xfd28\") pod \"5ad58695-120d-466b-bec0-3198637da77d\" (UID: \"5ad58695-120d-466b-bec0-3198637da77d\") " Jan 30 00:24:04 crc kubenswrapper[5103]: I0130 00:24:04.121432 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad58695-120d-466b-bec0-3198637da77d-kube-api-access-xfd28" (OuterVolumeSpecName: "kube-api-access-xfd28") pod "5ad58695-120d-466b-bec0-3198637da77d" (UID: "5ad58695-120d-466b-bec0-3198637da77d"). InnerVolumeSpecName "kube-api-access-xfd28". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:04 crc kubenswrapper[5103]: I0130 00:24:04.215351 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfd28\" (UniqueName: \"kubernetes.io/projected/5ad58695-120d-466b-bec0-3198637da77d-kube-api-access-xfd28\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:04 crc kubenswrapper[5103]: I0130 00:24:04.680600 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" event={"ID":"5ad58695-120d-466b-bec0-3198637da77d","Type":"ContainerDied","Data":"7772c6906efa6b132de6eed89a2bcf6e9224bd50777e9266ced80722fb99c257"} Jan 30 00:24:04 crc kubenswrapper[5103]: I0130 00:24:04.680650 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495544-kj6vw" Jan 30 00:24:04 crc kubenswrapper[5103]: I0130 00:24:04.680669 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7772c6906efa6b132de6eed89a2bcf6e9224bd50777e9266ced80722fb99c257" Jan 30 00:24:11 crc kubenswrapper[5103]: I0130 00:24:11.712086 5103 scope.go:117] "RemoveContainer" containerID="6a7fd1995b6a8a171d0e40fefb833585836e0fd2f26cee4929db65c67c2020ed" Jan 30 00:24:11 crc kubenswrapper[5103]: I0130 00:24:11.753195 5103 scope.go:117] "RemoveContainer" containerID="031b700fa5d16bdcf217e427d22bdb4e9375aa8d5d6a6527aa3dbb074dc44b32" Jan 30 00:24:14 crc kubenswrapper[5103]: I0130 00:24:14.560244 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-525dp" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.456585 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cdjcm"] Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.461531 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ad58695-120d-466b-bec0-3198637da77d" containerName="oc" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.461562 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad58695-120d-466b-bec0-3198637da77d" containerName="oc" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.461720 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="5ad58695-120d-466b-bec0-3198637da77d" containerName="oc" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.635860 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cdjcm"] Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.636010 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.703003 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7q6f\" (UniqueName: \"kubernetes.io/projected/41134658-93eb-415b-b6ac-9d0a73083d6a-kube-api-access-l7q6f\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.703091 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-utilities\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.703309 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-catalog-content\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.804309 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7q6f\" (UniqueName: \"kubernetes.io/projected/41134658-93eb-415b-b6ac-9d0a73083d6a-kube-api-access-l7q6f\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.804377 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-utilities\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.804415 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-catalog-content\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.805178 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-utilities\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.805189 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-catalog-content\") pod \"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.840523 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7q6f\" (UniqueName: \"kubernetes.io/projected/41134658-93eb-415b-b6ac-9d0a73083d6a-kube-api-access-l7q6f\") pod 
\"community-operators-cdjcm\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:29 crc kubenswrapper[5103]: I0130 00:24:29.950598 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:30 crc kubenswrapper[5103]: I0130 00:24:30.195590 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cdjcm"] Jan 30 00:24:30 crc kubenswrapper[5103]: I0130 00:24:30.866231 5103 generic.go:358] "Generic (PLEG): container finished" podID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerID="8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8" exitCode=0 Jan 30 00:24:30 crc kubenswrapper[5103]: I0130 00:24:30.866509 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerDied","Data":"8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8"} Jan 30 00:24:30 crc kubenswrapper[5103]: I0130 00:24:30.867258 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerStarted","Data":"e98b3f20b6b07b3b4311f68953e87f12a1d1c55d1f4c76c02c7e9c2872921338"} Jan 30 00:24:31 crc kubenswrapper[5103]: I0130 00:24:31.876872 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerStarted","Data":"c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e"} Jan 30 00:24:32 crc kubenswrapper[5103]: I0130 00:24:32.884704 5103 generic.go:358] "Generic (PLEG): container finished" podID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerID="c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e" exitCode=0 Jan 30 00:24:32 crc kubenswrapper[5103]: I0130 00:24:32.884835 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerDied","Data":"c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e"} Jan 30 00:24:33 crc kubenswrapper[5103]: I0130 00:24:33.902661 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerStarted","Data":"fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7"} Jan 30 00:24:33 crc kubenswrapper[5103]: I0130 00:24:33.933559 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cdjcm" podStartSLOduration=4.380336716 podStartE2EDuration="4.9335355s" podCreationTimestamp="2026-01-30 00:24:29 +0000 UTC" firstStartedPulling="2026-01-30 00:24:30.868039029 +0000 UTC m=+860.739537121" lastFinishedPulling="2026-01-30 00:24:31.421237813 +0000 UTC m=+861.292735905" observedRunningTime="2026-01-30 00:24:33.930230019 +0000 UTC m=+863.801728141" watchObservedRunningTime="2026-01-30 00:24:33.9335355 +0000 UTC m=+863.805033562" Jan 30 00:24:39 crc kubenswrapper[5103]: I0130 00:24:39.951356 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:39 crc kubenswrapper[5103]: I0130 00:24:39.951734 5103 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:40 crc kubenswrapper[5103]: I0130 00:24:40.017131 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:40 crc kubenswrapper[5103]: I0130 00:24:40.999724 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:41 crc kubenswrapper[5103]: I0130 00:24:41.058773 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cdjcm"] Jan 30 00:24:42 crc kubenswrapper[5103]: I0130 00:24:42.962861 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cdjcm" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="registry-server" containerID="cri-o://fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7" gracePeriod=2 Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.110866 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-29m6m"] Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.111908 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-29m6m" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="registry-server" containerID="cri-o://445648755df7aa746d13412d63bc4c92d3a18d86920e1a4192ac33176f6aa9d6" gracePeriod=30 Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.910473 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.970950 5103 generic.go:358] "Generic (PLEG): container finished" podID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerID="fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7" exitCode=0 Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.971168 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerDied","Data":"fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7"} Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.971219 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdjcm" event={"ID":"41134658-93eb-415b-b6ac-9d0a73083d6a","Type":"ContainerDied","Data":"e98b3f20b6b07b3b4311f68953e87f12a1d1c55d1f4c76c02c7e9c2872921338"} Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.971176 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cdjcm" Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.971243 5103 scope.go:117] "RemoveContainer" containerID="fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7" Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.976078 5103 generic.go:358] "Generic (PLEG): container finished" podID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerID="445648755df7aa746d13412d63bc4c92d3a18d86920e1a4192ac33176f6aa9d6" exitCode=0 Jan 30 00:24:43 crc kubenswrapper[5103]: I0130 00:24:43.976220 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29m6m" event={"ID":"3c68a080-5bee-4c96-8683-dfbc9187c20f","Type":"ContainerDied","Data":"445648755df7aa746d13412d63bc4c92d3a18d86920e1a4192ac33176f6aa9d6"} Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.001953 5103 scope.go:117] "RemoveContainer" containerID="c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.025430 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7q6f\" (UniqueName: \"kubernetes.io/projected/41134658-93eb-415b-b6ac-9d0a73083d6a-kube-api-access-l7q6f\") pod \"41134658-93eb-415b-b6ac-9d0a73083d6a\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.025552 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-catalog-content\") pod \"41134658-93eb-415b-b6ac-9d0a73083d6a\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.025651 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-utilities\") pod \"41134658-93eb-415b-b6ac-9d0a73083d6a\" (UID: \"41134658-93eb-415b-b6ac-9d0a73083d6a\") " Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.027363 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-utilities" (OuterVolumeSpecName: "utilities") pod "41134658-93eb-415b-b6ac-9d0a73083d6a" (UID: "41134658-93eb-415b-b6ac-9d0a73083d6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.028515 5103 scope.go:117] "RemoveContainer" containerID="8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.032015 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41134658-93eb-415b-b6ac-9d0a73083d6a-kube-api-access-l7q6f" (OuterVolumeSpecName: "kube-api-access-l7q6f") pod "41134658-93eb-415b-b6ac-9d0a73083d6a" (UID: "41134658-93eb-415b-b6ac-9d0a73083d6a"). InnerVolumeSpecName "kube-api-access-l7q6f". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.046503 5103 scope.go:117] "RemoveContainer" containerID="fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7" Jan 30 00:24:44 crc kubenswrapper[5103]: E0130 00:24:44.046875 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7\": container with ID starting with fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7 not found: ID does not exist" containerID="fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.046914 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7"} err="failed to get container status \"fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7\": rpc error: code = NotFound desc = could not find container \"fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7\": container with ID starting with fb2c49a5dfce80b84e8239111a841058096b04ebfce2bc63076d6f3050a25bc7 not found: ID does not exist" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.046938 5103 scope.go:117] "RemoveContainer" containerID="c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e" Jan 30 00:24:44 crc kubenswrapper[5103]: E0130 00:24:44.047203 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e\": container with ID starting with c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e not found: ID does not exist" containerID="c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.047231 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e"} err="failed to get container status \"c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e\": rpc error: code = NotFound desc = could not find container \"c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e\": container with ID starting with c8e539d4b66e0898073d1cd2e2b71914c93e8d167398dba57d3a00be277a127e not found: ID does not exist" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.047251 5103 scope.go:117] "RemoveContainer" containerID="8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8" Jan 30 00:24:44 crc kubenswrapper[5103]: E0130 00:24:44.047500 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8\": container with ID starting with 8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8 not found: ID does not exist" containerID="8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.047525 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8"} err="failed to get container status \"8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8\": rpc error: code = NotFound desc = could not 
find container \"8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8\": container with ID starting with 8c788fa4ac72470da418344d2e32aa42ab07ac5159383e1a8a9b941436c172e8 not found: ID does not exist" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.079002 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41134658-93eb-415b-b6ac-9d0a73083d6a" (UID: "41134658-93eb-415b-b6ac-9d0a73083d6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.127635 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.127667 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41134658-93eb-415b-b6ac-9d0a73083d6a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.127676 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l7q6f\" (UniqueName: \"kubernetes.io/projected/41134658-93eb-415b-b6ac-9d0a73083d6a-kube-api-access-l7q6f\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.170719 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.228961 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-catalog-content\") pod \"3c68a080-5bee-4c96-8683-dfbc9187c20f\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.229147 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-utilities\") pod \"3c68a080-5bee-4c96-8683-dfbc9187c20f\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.229217 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wk6p\" (UniqueName: \"kubernetes.io/projected/3c68a080-5bee-4c96-8683-dfbc9187c20f-kube-api-access-6wk6p\") pod \"3c68a080-5bee-4c96-8683-dfbc9187c20f\" (UID: \"3c68a080-5bee-4c96-8683-dfbc9187c20f\") " Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.230460 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-utilities" (OuterVolumeSpecName: "utilities") pod "3c68a080-5bee-4c96-8683-dfbc9187c20f" (UID: "3c68a080-5bee-4c96-8683-dfbc9187c20f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.234350 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c68a080-5bee-4c96-8683-dfbc9187c20f-kube-api-access-6wk6p" (OuterVolumeSpecName: "kube-api-access-6wk6p") pod "3c68a080-5bee-4c96-8683-dfbc9187c20f" (UID: "3c68a080-5bee-4c96-8683-dfbc9187c20f"). 
InnerVolumeSpecName "kube-api-access-6wk6p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.257133 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c68a080-5bee-4c96-8683-dfbc9187c20f" (UID: "3c68a080-5bee-4c96-8683-dfbc9187c20f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.304613 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cdjcm"] Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.313550 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cdjcm"] Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.330648 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6wk6p\" (UniqueName: \"kubernetes.io/projected/3c68a080-5bee-4c96-8683-dfbc9187c20f-kube-api-access-6wk6p\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.330674 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.330684 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c68a080-5bee-4c96-8683-dfbc9187c20f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.875026 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" path="/var/lib/kubelet/pods/41134658-93eb-415b-b6ac-9d0a73083d6a/volumes" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.990628 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29m6m" event={"ID":"3c68a080-5bee-4c96-8683-dfbc9187c20f","Type":"ContainerDied","Data":"93eda4d031aed494c523ca77f1e91f142fd42bf9c41c24e7b6cc12d812375e6e"} Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.990716 5103 scope.go:117] "RemoveContainer" containerID="445648755df7aa746d13412d63bc4c92d3a18d86920e1a4192ac33176f6aa9d6" Jan 30 00:24:44 crc kubenswrapper[5103]: I0130 00:24:44.990787 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29m6m" Jan 30 00:24:45 crc kubenswrapper[5103]: I0130 00:24:45.021116 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-29m6m"] Jan 30 00:24:45 crc kubenswrapper[5103]: I0130 00:24:45.027324 5103 scope.go:117] "RemoveContainer" containerID="e28c324607a0aa3b715230dc818fcdca18f72d1a3d44777010087b06d0384ded" Jan 30 00:24:45 crc kubenswrapper[5103]: I0130 00:24:45.029219 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-29m6m"] Jan 30 00:24:45 crc kubenswrapper[5103]: I0130 00:24:45.044360 5103 scope.go:117] "RemoveContainer" containerID="44a4d6d1f7b80ae12b217c95d1dbfec630c58aa07e5059535d601fbdbef544c4" Jan 30 00:24:46 crc kubenswrapper[5103]: I0130 00:24:46.874694 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" path="/var/lib/kubelet/pods/3c68a080-5bee-4c96-8683-dfbc9187c20f/volumes" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.819847 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj"] Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821671 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="registry-server" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821704 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="registry-server" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821729 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="registry-server" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821741 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="registry-server" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821757 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="extract-utilities" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821771 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="extract-utilities" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821814 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="extract-utilities" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821826 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="extract-utilities" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821846 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="extract-content" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821859 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="extract-content" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.821874 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="extract-content" Jan 30 00:24:48 crc 
kubenswrapper[5103]: I0130 00:24:48.821886 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="extract-content" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.822096 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="41134658-93eb-415b-b6ac-9d0a73083d6a" containerName="registry-server" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.822124 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="3c68a080-5bee-4c96-8683-dfbc9187c20f" containerName="registry-server" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.832715 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj"] Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.832920 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.840635 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.891520 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.891691 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.891723 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7x2x\" (UniqueName: \"kubernetes.io/projected/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-kube-api-access-x7x2x\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.993954 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.994234 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.994307 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x7x2x\" (UniqueName: \"kubernetes.io/projected/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-kube-api-access-x7x2x\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.995019 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:48 crc kubenswrapper[5103]: I0130 00:24:48.995020 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:49 crc kubenswrapper[5103]: I0130 00:24:49.053110 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7x2x\" (UniqueName: \"kubernetes.io/projected/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-kube-api-access-x7x2x\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:49 crc kubenswrapper[5103]: I0130 00:24:49.160477 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:49 crc kubenswrapper[5103]: I0130 00:24:49.432628 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj"] Jan 30 00:24:50 crc kubenswrapper[5103]: I0130 00:24:50.044751 5103 generic.go:358] "Generic (PLEG): container finished" podID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerID="6eac2d9439de32ff558430295827cf834d20212b102dcb1cbad169ab2ebd4e6b" exitCode=0 Jan 30 00:24:50 crc kubenswrapper[5103]: I0130 00:24:50.045210 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" event={"ID":"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e","Type":"ContainerDied","Data":"6eac2d9439de32ff558430295827cf834d20212b102dcb1cbad169ab2ebd4e6b"} Jan 30 00:24:50 crc kubenswrapper[5103]: I0130 00:24:50.045276 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" event={"ID":"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e","Type":"ContainerStarted","Data":"6c4c13447d1abea83e5e39c862df4c70f05d3b0ffc7b4b85c3f136e1edd83444"} Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.062799 5103 generic.go:358] "Generic (PLEG): container finished" podID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerID="7df1ea0921bac385d4e348342f6a864cf1c38f8272c1fa6d930dec98940f8ec8" exitCode=0 Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.062901 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" event={"ID":"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e","Type":"ContainerDied","Data":"7df1ea0921bac385d4e348342f6a864cf1c38f8272c1fa6d930dec98940f8ec8"} Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.368842 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nv4qh"] Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.380616 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.385307 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nv4qh"] Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.447703 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-catalog-content\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.447895 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vnkw\" (UniqueName: \"kubernetes.io/projected/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-kube-api-access-6vnkw\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.448015 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-utilities\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.548691 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-catalog-content\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.548946 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6vnkw\" (UniqueName: \"kubernetes.io/projected/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-kube-api-access-6vnkw\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.548982 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-utilities\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.549392 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-utilities\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.549863 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-catalog-content\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.573060 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6vnkw\" (UniqueName: \"kubernetes.io/projected/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-kube-api-access-6vnkw\") pod \"redhat-operators-nv4qh\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.697568 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:24:52 crc kubenswrapper[5103]: I0130 00:24:52.879987 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nv4qh"] Jan 30 00:24:53 crc kubenswrapper[5103]: I0130 00:24:53.069211 5103 generic.go:358] "Generic (PLEG): container finished" podID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerID="aef55cb05bf11143f58f8e0b8e055586faee2995d04fc7874acb3b506132512f" exitCode=0 Jan 30 00:24:53 crc kubenswrapper[5103]: I0130 00:24:53.069288 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" event={"ID":"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e","Type":"ContainerDied","Data":"aef55cb05bf11143f58f8e0b8e055586faee2995d04fc7874acb3b506132512f"} Jan 30 00:24:53 crc kubenswrapper[5103]: I0130 00:24:53.070810 5103 generic.go:358] "Generic (PLEG): container finished" podID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerID="d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89" exitCode=0 Jan 30 00:24:53 crc kubenswrapper[5103]: I0130 00:24:53.070928 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerDied","Data":"d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89"} Jan 30 00:24:53 crc kubenswrapper[5103]: I0130 00:24:53.070955 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerStarted","Data":"025adefaf8142ce355a7aa90e0f2747b128b7c7fc3858dd12fcfec2adb94ac75"} Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.090798 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerStarted","Data":"cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75"} Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.296400 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.371549 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-util\") pod \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.371689 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-bundle\") pod \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.371715 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7x2x\" (UniqueName: \"kubernetes.io/projected/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-kube-api-access-x7x2x\") pod \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\" (UID: \"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e\") " Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.373829 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-bundle" (OuterVolumeSpecName: "bundle") pod "34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" (UID: "34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.385227 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-kube-api-access-x7x2x" (OuterVolumeSpecName: "kube-api-access-x7x2x") pod "34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" (UID: "34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e"). InnerVolumeSpecName "kube-api-access-x7x2x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.385388 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-util" (OuterVolumeSpecName: "util") pod "34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" (UID: "34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.472675 5103 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-util\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.472708 5103 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:54 crc kubenswrapper[5103]: I0130 00:24:54.472717 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x7x2x\" (UniqueName: \"kubernetes.io/projected/34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e-kube-api-access-x7x2x\") on node \"crc\" DevicePath \"\"" Jan 30 00:24:55 crc kubenswrapper[5103]: I0130 00:24:55.101222 5103 generic.go:358] "Generic (PLEG): container finished" podID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerID="cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75" exitCode=0 Jan 30 00:24:55 crc kubenswrapper[5103]: I0130 00:24:55.101316 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerDied","Data":"cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75"} Jan 30 00:24:55 crc kubenswrapper[5103]: I0130 00:24:55.107422 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" event={"ID":"34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e","Type":"ContainerDied","Data":"6c4c13447d1abea83e5e39c862df4c70f05d3b0ffc7b4b85c3f136e1edd83444"} Jan 30 00:24:55 crc kubenswrapper[5103]: I0130 00:24:55.107506 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c4c13447d1abea83e5e39c862df4c70f05d3b0ffc7b4b85c3f136e1edd83444" Jan 30 00:24:55 crc kubenswrapper[5103]: I0130 00:24:55.107453 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015031 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc"] Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015647 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="util" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015666 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="util" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015678 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="pull" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015685 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="pull" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015704 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="extract" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015713 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="extract" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.015821 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e" containerName="extract" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.027259 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc"] Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.027413 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.029787 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.093895 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.094178 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.094291 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4tdf\" (UniqueName: \"kubernetes.io/projected/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-kube-api-access-r4tdf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.114369 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerStarted","Data":"305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8"} Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.138033 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nv4qh" podStartSLOduration=3.460727133 podStartE2EDuration="4.138012092s" podCreationTimestamp="2026-01-30 00:24:52 +0000 UTC" firstStartedPulling="2026-01-30 00:24:53.071596788 +0000 UTC m=+882.943094840" lastFinishedPulling="2026-01-30 00:24:53.748881747 +0000 UTC m=+883.620379799" observedRunningTime="2026-01-30 00:24:56.13183446 +0000 UTC m=+886.003332532" watchObservedRunningTime="2026-01-30 00:24:56.138012092 +0000 UTC m=+886.009510154" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.195922 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r4tdf\" (UniqueName: \"kubernetes.io/projected/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-kube-api-access-r4tdf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.196086 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.196196 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.197327 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.197671 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.234287 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4tdf\" (UniqueName: \"kubernetes.io/projected/969009ac-f9ae-48c0-b45e-bf9a5844b7ff-kube-api-access-r4tdf\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc\" (UID: \"969009ac-f9ae-48c0-b45e-bf9a5844b7ff\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.350422 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" Jan 30 00:24:56 crc kubenswrapper[5103]: I0130 00:24:56.590256 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc"] Jan 30 00:24:56 crc kubenswrapper[5103]: W0130 00:24:56.600108 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod969009ac_f9ae_48c0_b45e_bf9a5844b7ff.slice/crio-162c3712c0567f536e54e392adcf51d6a5e7ae9780c2a1c1bf9b26bb945576ed WatchSource:0}: Error finding container 162c3712c0567f536e54e392adcf51d6a5e7ae9780c2a1c1bf9b26bb945576ed: Status 404 returned error can't find the container with id 162c3712c0567f536e54e392adcf51d6a5e7ae9780c2a1c1bf9b26bb945576ed Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.122989 5103 generic.go:358] "Generic (PLEG): container finished" podID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" containerID="1ad6d275bb45cd7dd78c0284bea8eeb19469eca12ddc818acd7996f928a2d92e" exitCode=0 Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.123236 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" event={"ID":"969009ac-f9ae-48c0-b45e-bf9a5844b7ff","Type":"ContainerDied","Data":"1ad6d275bb45cd7dd78c0284bea8eeb19469eca12ddc818acd7996f928a2d92e"} Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.123320 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" event={"ID":"969009ac-f9ae-48c0-b45e-bf9a5844b7ff","Type":"ContainerStarted","Data":"162c3712c0567f536e54e392adcf51d6a5e7ae9780c2a1c1bf9b26bb945576ed"} Jan 30 00:24:57 crc kubenswrapper[5103]: E0130 00:24:57.363830 5103 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:24:57 crc kubenswrapper[5103]: E0130 00:24:57.364237 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_openshift-marketplace(969009ac-f9ae-48c0-b45e-bf9a5844b7ff): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:24:57 crc kubenswrapper[5103]: E0130 00:24:57.365477 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.401332 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg"] Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.405757 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.413310 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg"] Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.513273 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.513328 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs5v9\" (UniqueName: \"kubernetes.io/projected/b1decb0e-49d8-404d-966d-b8249754982f-kube-api-access-bs5v9\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.513403 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.614678 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.614854 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.614919 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bs5v9\" (UniqueName: \"kubernetes.io/projected/b1decb0e-49d8-404d-966d-b8249754982f-kube-api-access-bs5v9\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.615831 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.616020 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.639208 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs5v9\" (UniqueName: \"kubernetes.io/projected/b1decb0e-49d8-404d-966d-b8249754982f-kube-api-access-bs5v9\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.721612 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:24:57 crc kubenswrapper[5103]: I0130 00:24:57.953480 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg"] Jan 30 00:24:57 crc kubenswrapper[5103]: W0130 00:24:57.963274 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1decb0e_49d8_404d_966d_b8249754982f.slice/crio-d2add8cc90da4c85a774a104509a8c88fc2b9ac09cbfa8a920227e4cb1710844 WatchSource:0}: Error finding container d2add8cc90da4c85a774a104509a8c88fc2b9ac09cbfa8a920227e4cb1710844: Status 404 returned error can't find the container with id d2add8cc90da4c85a774a104509a8c88fc2b9ac09cbfa8a920227e4cb1710844 Jan 30 00:24:58 crc kubenswrapper[5103]: I0130 00:24:58.129070 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" event={"ID":"b1decb0e-49d8-404d-966d-b8249754982f","Type":"ContainerStarted","Data":"34ee1b14394c9fb1a3cb58d8e258a9f8b5b6440432110fd6bb5f2c26f0b50abc"} Jan 30 00:24:58 crc kubenswrapper[5103]: I0130 00:24:58.130475 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" event={"ID":"b1decb0e-49d8-404d-966d-b8249754982f","Type":"ContainerStarted","Data":"d2add8cc90da4c85a774a104509a8c88fc2b9ac09cbfa8a920227e4cb1710844"} Jan 30 00:24:58 crc kubenswrapper[5103]: E0130 00:24:58.131841 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry 
registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:24:59 crc kubenswrapper[5103]: I0130 00:24:59.138243 5103 generic.go:358] "Generic (PLEG): container finished" podID="b1decb0e-49d8-404d-966d-b8249754982f" containerID="34ee1b14394c9fb1a3cb58d8e258a9f8b5b6440432110fd6bb5f2c26f0b50abc" exitCode=0 Jan 30 00:24:59 crc kubenswrapper[5103]: I0130 00:24:59.138312 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" event={"ID":"b1decb0e-49d8-404d-966d-b8249754982f","Type":"ContainerDied","Data":"34ee1b14394c9fb1a3cb58d8e258a9f8b5b6440432110fd6bb5f2c26f0b50abc"} Jan 30 00:25:02 crc kubenswrapper[5103]: I0130 00:25:02.698173 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:25:02 crc kubenswrapper[5103]: I0130 00:25:02.698223 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:25:02 crc kubenswrapper[5103]: I0130 00:25:02.774538 5103 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:25:03 crc kubenswrapper[5103]: I0130 00:25:03.226868 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:25:04 crc kubenswrapper[5103]: I0130 00:25:04.349656 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nv4qh"] Jan 30 00:25:05 crc kubenswrapper[5103]: I0130 00:25:05.172393 5103 generic.go:358] "Generic (PLEG): container finished" podID="b1decb0e-49d8-404d-966d-b8249754982f" containerID="ad869cf96bc9dddcccbe1599fb46df8956db41181ed889ba6b3358c30d513e6f" exitCode=0 Jan 30 00:25:05 crc kubenswrapper[5103]: I0130 00:25:05.172454 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" event={"ID":"b1decb0e-49d8-404d-966d-b8249754982f","Type":"ContainerDied","Data":"ad869cf96bc9dddcccbe1599fb46df8956db41181ed889ba6b3358c30d513e6f"} Jan 30 00:25:05 crc kubenswrapper[5103]: I0130 00:25:05.173127 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nv4qh" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="registry-server" containerID="cri-o://305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8" gracePeriod=2 Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.034435 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.142610 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-utilities\") pod \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.142729 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-catalog-content\") pod \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.142802 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vnkw\" (UniqueName: \"kubernetes.io/projected/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-kube-api-access-6vnkw\") pod \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\" (UID: \"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21\") " Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.143502 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-utilities" (OuterVolumeSpecName: "utilities") pod "cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" (UID: "cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.162119 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-kube-api-access-6vnkw" (OuterVolumeSpecName: "kube-api-access-6vnkw") pod "cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" (UID: "cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21"). InnerVolumeSpecName "kube-api-access-6vnkw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.183610 5103 generic.go:358] "Generic (PLEG): container finished" podID="b1decb0e-49d8-404d-966d-b8249754982f" containerID="bb60d5dd31f94af481701fffe7f1bd08115c8eff923b0dbe231d6e93cf2d86ce" exitCode=0 Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.183691 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" event={"ID":"b1decb0e-49d8-404d-966d-b8249754982f","Type":"ContainerDied","Data":"bb60d5dd31f94af481701fffe7f1bd08115c8eff923b0dbe231d6e93cf2d86ce"} Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.185743 5103 generic.go:358] "Generic (PLEG): container finished" podID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerID="305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8" exitCode=0 Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.185896 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerDied","Data":"305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8"} Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.185917 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv4qh" event={"ID":"cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21","Type":"ContainerDied","Data":"025adefaf8142ce355a7aa90e0f2747b128b7c7fc3858dd12fcfec2adb94ac75"} Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.185934 5103 scope.go:117] "RemoveContainer" containerID="305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.186101 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nv4qh" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.212224 5103 scope.go:117] "RemoveContainer" containerID="cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.232267 5103 scope.go:117] "RemoveContainer" containerID="d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.244068 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6vnkw\" (UniqueName: \"kubernetes.io/projected/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-kube-api-access-6vnkw\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.244099 5103 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.254456 5103 scope.go:117] "RemoveContainer" containerID="305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8" Jan 30 00:25:06 crc kubenswrapper[5103]: E0130 00:25:06.254821 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8\": container with ID starting with 305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8 not found: ID does not exist" containerID="305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.254855 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8"} err="failed to get container status \"305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8\": rpc error: code = NotFound desc = could not find container \"305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8\": container with ID starting with 305e8009a4c9f956de2429f7d3d520a10b5e2cf81aabb145b2ae16e5fdf683a8 not found: ID does not exist" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.254879 5103 scope.go:117] "RemoveContainer" containerID="cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75" Jan 30 00:25:06 crc kubenswrapper[5103]: E0130 00:25:06.255129 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75\": container with ID starting with cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75 not found: ID does not exist" containerID="cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.255153 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75"} err="failed to get container status \"cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75\": rpc error: code = NotFound desc = could not find container \"cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75\": container with ID starting with cea6e3289d70e5ac0688a12716d83d7408b90ba47efbd3b44ec6b8b5a1447b75 not found: ID does not exist" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.255179 5103 scope.go:117] "RemoveContainer" 
containerID="d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89" Jan 30 00:25:06 crc kubenswrapper[5103]: E0130 00:25:06.255366 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89\": container with ID starting with d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89 not found: ID does not exist" containerID="d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.255384 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89"} err="failed to get container status \"d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89\": rpc error: code = NotFound desc = could not find container \"d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89\": container with ID starting with d563bcecde37ec56d5838dd4e0527af1a81ea8e5d0a76f6300dffb7253edad89 not found: ID does not exist" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.274811 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" (UID: "cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.345210 5103 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.517101 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nv4qh"] Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.521121 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nv4qh"] Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.875363 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" path="/var/lib/kubelet/pods/cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21/volumes" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.994897 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75"] Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995629 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="extract-content" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995651 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="extract-content" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995696 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="extract-utilities" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995705 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="extract-utilities" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995717 5103 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="registry-server" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995725 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="registry-server" Jan 30 00:25:06 crc kubenswrapper[5103]: I0130 00:25:06.995845 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="cbd4fe92-9ae8-404f-b98b-eb54d0ea0d21" containerName="registry-server" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.011042 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.011212 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.013816 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.016572 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.017688 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-jdjvr\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.130923 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.135376 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.139532 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.139671 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-xn4jj\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.144078 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.147648 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.147978 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.153107 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhkb5\" (UniqueName: \"kubernetes.io/projected/957968da-8046-4a89-91ac-ecb8c0e83e85-kube-api-access-xhkb5\") pod \"obo-prometheus-operator-9bc85b4bf-mmf75\" (UID: \"957968da-8046-4a89-91ac-ecb8c0e83e85\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.170436 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.254734 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/888a411a-eaa9-4b4f-877b-0653ce686e73-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf\" (UID: \"888a411a-eaa9-4b4f-877b-0653ce686e73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.254797 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/888a411a-eaa9-4b4f-877b-0653ce686e73-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf\" (UID: \"888a411a-eaa9-4b4f-877b-0653ce686e73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.254868 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp\" (UID: \"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.255013 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xhkb5\" (UniqueName: \"kubernetes.io/projected/957968da-8046-4a89-91ac-ecb8c0e83e85-kube-api-access-xhkb5\") pod \"obo-prometheus-operator-9bc85b4bf-mmf75\" (UID: \"957968da-8046-4a89-91ac-ecb8c0e83e85\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.255073 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp\" (UID: \"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.277111 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhkb5\" (UniqueName: \"kubernetes.io/projected/957968da-8046-4a89-91ac-ecb8c0e83e85-kube-api-access-xhkb5\") 
pod \"obo-prometheus-operator-9bc85b4bf-mmf75\" (UID: \"957968da-8046-4a89-91ac-ecb8c0e83e85\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.320098 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-jcs7p"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.325399 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.325497 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.326583 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-jcs7p"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.327309 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-8trv4\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.327575 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.359686 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp\" (UID: \"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.359749 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/888a411a-eaa9-4b4f-877b-0653ce686e73-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf\" (UID: \"888a411a-eaa9-4b4f-877b-0653ce686e73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.359782 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/888a411a-eaa9-4b4f-877b-0653ce686e73-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf\" (UID: \"888a411a-eaa9-4b4f-877b-0653ce686e73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.359845 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp\" (UID: \"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.363814 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp\" (UID: \"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.367480 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/888a411a-eaa9-4b4f-877b-0653ce686e73-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf\" (UID: \"888a411a-eaa9-4b4f-877b-0653ce686e73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.370625 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp\" (UID: \"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.378563 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/888a411a-eaa9-4b4f-877b-0653ce686e73-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf\" (UID: \"888a411a-eaa9-4b4f-877b-0653ce686e73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.453796 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.460642 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g88fl\" (UniqueName: \"kubernetes.io/projected/e7d2bde2-5437-4672-b6b6-f2babe73dff0-kube-api-access-g88fl\") pod \"observability-operator-85c68dddb-jcs7p\" (UID: \"e7d2bde2-5437-4672-b6b6-f2babe73dff0\") " pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.460700 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7d2bde2-5437-4672-b6b6-f2babe73dff0-observability-operator-tls\") pod \"observability-operator-85c68dddb-jcs7p\" (UID: \"e7d2bde2-5437-4672-b6b6-f2babe73dff0\") " pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.462897 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.475645 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.531201 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-5r6dq"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533003 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="extract" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533016 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="extract" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533059 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="util" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533065 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="util" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533076 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="pull" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533081 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="pull" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.533166 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b1decb0e-49d8-404d-966d-b8249754982f" containerName="extract" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.561877 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-5r6dq"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.562020 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.562946 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-bundle\") pod \"b1decb0e-49d8-404d-966d-b8249754982f\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.562994 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs5v9\" (UniqueName: \"kubernetes.io/projected/b1decb0e-49d8-404d-966d-b8249754982f-kube-api-access-bs5v9\") pod \"b1decb0e-49d8-404d-966d-b8249754982f\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.563180 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-util\") pod \"b1decb0e-49d8-404d-966d-b8249754982f\" (UID: \"b1decb0e-49d8-404d-966d-b8249754982f\") " Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.563403 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7d2bde2-5437-4672-b6b6-f2babe73dff0-observability-operator-tls\") pod \"observability-operator-85c68dddb-jcs7p\" (UID: \"e7d2bde2-5437-4672-b6b6-f2babe73dff0\") " pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.563486 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g88fl\" (UniqueName: \"kubernetes.io/projected/e7d2bde2-5437-4672-b6b6-f2babe73dff0-kube-api-access-g88fl\") pod \"observability-operator-85c68dddb-jcs7p\" (UID: \"e7d2bde2-5437-4672-b6b6-f2babe73dff0\") " pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.566112 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-4p58v\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.569447 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-bundle" (OuterVolumeSpecName: "bundle") pod "b1decb0e-49d8-404d-966d-b8249754982f" (UID: "b1decb0e-49d8-404d-966d-b8249754982f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.569890 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7d2bde2-5437-4672-b6b6-f2babe73dff0-observability-operator-tls\") pod \"observability-operator-85c68dddb-jcs7p\" (UID: \"e7d2bde2-5437-4672-b6b6-f2babe73dff0\") " pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.570082 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1decb0e-49d8-404d-966d-b8249754982f-kube-api-access-bs5v9" (OuterVolumeSpecName: "kube-api-access-bs5v9") pod "b1decb0e-49d8-404d-966d-b8249754982f" (UID: "b1decb0e-49d8-404d-966d-b8249754982f"). InnerVolumeSpecName "kube-api-access-bs5v9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.597102 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-util" (OuterVolumeSpecName: "util") pod "b1decb0e-49d8-404d-966d-b8249754982f" (UID: "b1decb0e-49d8-404d-966d-b8249754982f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.600511 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g88fl\" (UniqueName: \"kubernetes.io/projected/e7d2bde2-5437-4672-b6b6-f2babe73dff0-kube-api-access-g88fl\") pod \"observability-operator-85c68dddb-jcs7p\" (UID: \"e7d2bde2-5437-4672-b6b6-f2babe73dff0\") " pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.643475 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.664495 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c7fdb9f-be0e-428a-88e1-283c31de8ad1-openshift-service-ca\") pod \"perses-operator-669c9f96b5-5r6dq\" (UID: \"8c7fdb9f-be0e-428a-88e1-283c31de8ad1\") " pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.664542 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tcrv\" (UniqueName: \"kubernetes.io/projected/8c7fdb9f-be0e-428a-88e1-283c31de8ad1-kube-api-access-6tcrv\") pod \"perses-operator-669c9f96b5-5r6dq\" (UID: \"8c7fdb9f-be0e-428a-88e1-283c31de8ad1\") " pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.664675 5103 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.664687 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bs5v9\" (UniqueName: \"kubernetes.io/projected/b1decb0e-49d8-404d-966d-b8249754982f-kube-api-access-bs5v9\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.664696 5103 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1decb0e-49d8-404d-966d-b8249754982f-util\") on node \"crc\" DevicePath \"\"" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.665579 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.760161 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf"] Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.767094 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c7fdb9f-be0e-428a-88e1-283c31de8ad1-openshift-service-ca\") pod \"perses-operator-669c9f96b5-5r6dq\" (UID: \"8c7fdb9f-be0e-428a-88e1-283c31de8ad1\") " 
pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.767142 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6tcrv\" (UniqueName: \"kubernetes.io/projected/8c7fdb9f-be0e-428a-88e1-283c31de8ad1-kube-api-access-6tcrv\") pod \"perses-operator-669c9f96b5-5r6dq\" (UID: \"8c7fdb9f-be0e-428a-88e1-283c31de8ad1\") " pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.768233 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c7fdb9f-be0e-428a-88e1-283c31de8ad1-openshift-service-ca\") pod \"perses-operator-669c9f96b5-5r6dq\" (UID: \"8c7fdb9f-be0e-428a-88e1-283c31de8ad1\") " pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.788367 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tcrv\" (UniqueName: \"kubernetes.io/projected/8c7fdb9f-be0e-428a-88e1-283c31de8ad1-kube-api-access-6tcrv\") pod \"perses-operator-669c9f96b5-5r6dq\" (UID: \"8c7fdb9f-be0e-428a-88e1-283c31de8ad1\") " pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.805711 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp"] Jan 30 00:25:07 crc kubenswrapper[5103]: W0130 00:25:07.823169 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda88f3da2_a157_4b8b_9fe6_ff6ef7466a8d.slice/crio-8f700f980fe2979d89514d98be9c5d69db5f1f6fbf7db46df447a73234bd6778 WatchSource:0}: Error finding container 8f700f980fe2979d89514d98be9c5d69db5f1f6fbf7db46df447a73234bd6778: Status 404 returned error can't find the container with id 8f700f980fe2979d89514d98be9c5d69db5f1f6fbf7db46df447a73234bd6778 Jan 30 00:25:07 crc kubenswrapper[5103]: I0130 00:25:07.886764 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.127700 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-5r6dq"] Jan 30 00:25:08 crc kubenswrapper[5103]: W0130 00:25:08.133166 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c7fdb9f_be0e_428a_88e1_283c31de8ad1.slice/crio-2fa6a3392dfdfa9eb7123ad0a451652edf6df9d16741b52fcb6744a4ab43fe38 WatchSource:0}: Error finding container 2fa6a3392dfdfa9eb7123ad0a451652edf6df9d16741b52fcb6744a4ab43fe38: Status 404 returned error can't find the container with id 2fa6a3392dfdfa9eb7123ad0a451652edf6df9d16741b52fcb6744a4ab43fe38 Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.175908 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-jcs7p"] Jan 30 00:25:08 crc kubenswrapper[5103]: W0130 00:25:08.179744 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7d2bde2_5437_4672_b6b6_f2babe73dff0.slice/crio-a50242013c5e625dd5ef6a1c002997b85fee0964d19cfd7eaaafbbe5c31eeee4 WatchSource:0}: Error finding container a50242013c5e625dd5ef6a1c002997b85fee0964d19cfd7eaaafbbe5c31eeee4: Status 404 returned error can't find the container with id a50242013c5e625dd5ef6a1c002997b85fee0964d19cfd7eaaafbbe5c31eeee4 Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.200960 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" event={"ID":"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d","Type":"ContainerStarted","Data":"8f700f980fe2979d89514d98be9c5d69db5f1f6fbf7db46df447a73234bd6778"} Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.202297 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" event={"ID":"888a411a-eaa9-4b4f-877b-0653ce686e73","Type":"ContainerStarted","Data":"870a5afa83e27078eaf7065dd2f85217830e3de67523125c47c4e1afe6e815dd"} Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.204873 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" event={"ID":"957968da-8046-4a89-91ac-ecb8c0e83e85","Type":"ContainerStarted","Data":"cfc450716b9dc4967b0b8bdc7c2d7267bad5d06a7ed252eeef41823ba91674f4"} Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.206356 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" event={"ID":"e7d2bde2-5437-4672-b6b6-f2babe73dff0","Type":"ContainerStarted","Data":"a50242013c5e625dd5ef6a1c002997b85fee0964d19cfd7eaaafbbe5c31eeee4"} Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.207274 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" event={"ID":"8c7fdb9f-be0e-428a-88e1-283c31de8ad1","Type":"ContainerStarted","Data":"2fa6a3392dfdfa9eb7123ad0a451652edf6df9d16741b52fcb6744a4ab43fe38"} Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.210247 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" 
event={"ID":"b1decb0e-49d8-404d-966d-b8249754982f","Type":"ContainerDied","Data":"d2add8cc90da4c85a774a104509a8c88fc2b9ac09cbfa8a920227e4cb1710844"} Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.210277 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2add8cc90da4c85a774a104509a8c88fc2b9ac09cbfa8a920227e4cb1710844" Jan 30 00:25:08 crc kubenswrapper[5103]: I0130 00:25:08.210375 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg" Jan 30 00:25:09 crc kubenswrapper[5103]: E0130 00:25:09.954192 5103 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:25:09 crc kubenswrapper[5103]: E0130 00:25:09.954374 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_openshift-marketplace(969009ac-f9ae-48c0-b45e-bf9a5844b7ff): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get 
\"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:25:09 crc kubenswrapper[5103]: E0130 00:25:09.955596 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:25:11 crc kubenswrapper[5103]: I0130 00:25:11.416226 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swfns_a7dd7e02-4357-4643-8c23-2fb57ba70405/kube-multus/0.log" Jan 30 00:25:11 crc kubenswrapper[5103]: I0130 00:25:11.419841 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swfns_a7dd7e02-4357-4643-8c23-2fb57ba70405/kube-multus/0.log" Jan 30 00:25:11 crc kubenswrapper[5103]: I0130 00:25:11.432479 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:25:11 crc kubenswrapper[5103]: I0130 00:25:11.437461 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.268802 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n"] Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.274858 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.277537 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-c57bl\"" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.277640 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.289387 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.294775 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n"] Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.321316 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67b9f41f-8bca-414a-aabd-5398b6f1ffe6-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-pl76n\" (UID: \"67b9f41f-8bca-414a-aabd-5398b6f1ffe6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.321378 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfsjh\" (UniqueName: \"kubernetes.io/projected/67b9f41f-8bca-414a-aabd-5398b6f1ffe6-kube-api-access-lfsjh\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-pl76n\" (UID: \"67b9f41f-8bca-414a-aabd-5398b6f1ffe6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.423127 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67b9f41f-8bca-414a-aabd-5398b6f1ffe6-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-pl76n\" (UID: \"67b9f41f-8bca-414a-aabd-5398b6f1ffe6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.423189 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lfsjh\" (UniqueName: \"kubernetes.io/projected/67b9f41f-8bca-414a-aabd-5398b6f1ffe6-kube-api-access-lfsjh\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-pl76n\" (UID: \"67b9f41f-8bca-414a-aabd-5398b6f1ffe6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.423765 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67b9f41f-8bca-414a-aabd-5398b6f1ffe6-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-pl76n\" (UID: \"67b9f41f-8bca-414a-aabd-5398b6f1ffe6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.460900 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfsjh\" (UniqueName: \"kubernetes.io/projected/67b9f41f-8bca-414a-aabd-5398b6f1ffe6-kube-api-access-lfsjh\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-pl76n\" (UID: 
\"67b9f41f-8bca-414a-aabd-5398b6f1ffe6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:15 crc kubenswrapper[5103]: I0130 00:25:15.608491 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" Jan 30 00:25:19 crc kubenswrapper[5103]: I0130 00:25:19.648292 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n"] Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.335664 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" event={"ID":"e7d2bde2-5437-4672-b6b6-f2babe73dff0","Type":"ContainerStarted","Data":"648d99cc723956ede678fd843e24f55e68b1ac8c566d35346e833284c9d1828e"} Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.335886 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.337107 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" event={"ID":"8c7fdb9f-be0e-428a-88e1-283c31de8ad1","Type":"ContainerStarted","Data":"be0579b4eb21394cdc71d98a5d8cde738d0c294a1eb8b412899f2605ced8d92d"} Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.337359 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.338525 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" event={"ID":"67b9f41f-8bca-414a-aabd-5398b6f1ffe6","Type":"ContainerStarted","Data":"5298064fa6c1d9b59596c4422ff9d2f2f7038900596d440633b997a05b4313aa"} Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.340263 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" event={"ID":"a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d","Type":"ContainerStarted","Data":"ebd23426f4af1009cb12abd62a537705f14957a23150e93563be229fb417e68e"} Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.341844 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" event={"ID":"888a411a-eaa9-4b4f-877b-0653ce686e73","Type":"ContainerStarted","Data":"8e45ab069d765abb8028c0e37ab5817fff76356f6b91187b7cd925638c0600a5"} Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.343441 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" event={"ID":"957968da-8046-4a89-91ac-ecb8c0e83e85","Type":"ContainerStarted","Data":"ab45c908619101167cf24f626cd5f12cef86c0073baef3e5281ecef13e05355c"} Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.361892 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" podStartSLOduration=2.126964376 podStartE2EDuration="13.361878548s" podCreationTimestamp="2026-01-30 00:25:07 +0000 UTC" firstStartedPulling="2026-01-30 00:25:08.181471565 +0000 UTC m=+898.052969617" lastFinishedPulling="2026-01-30 00:25:19.416385737 +0000 UTC m=+909.287883789" observedRunningTime="2026-01-30 
00:25:20.361105659 +0000 UTC m=+910.232603731" watchObservedRunningTime="2026-01-30 00:25:20.361878548 +0000 UTC m=+910.233376600" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.363863 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-jcs7p" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.388608 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-mmf75" podStartSLOduration=2.701486364 podStartE2EDuration="14.388589123s" podCreationTimestamp="2026-01-30 00:25:06 +0000 UTC" firstStartedPulling="2026-01-30 00:25:07.695802207 +0000 UTC m=+897.567300259" lastFinishedPulling="2026-01-30 00:25:19.382904966 +0000 UTC m=+909.254403018" observedRunningTime="2026-01-30 00:25:20.386762518 +0000 UTC m=+910.258260580" watchObservedRunningTime="2026-01-30 00:25:20.388589123 +0000 UTC m=+910.260087165" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.412066 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp" podStartSLOduration=1.835417461 podStartE2EDuration="13.412038729s" podCreationTimestamp="2026-01-30 00:25:07 +0000 UTC" firstStartedPulling="2026-01-30 00:25:07.826502994 +0000 UTC m=+897.698001046" lastFinishedPulling="2026-01-30 00:25:19.403124262 +0000 UTC m=+909.274622314" observedRunningTime="2026-01-30 00:25:20.406309168 +0000 UTC m=+910.277807230" watchObservedRunningTime="2026-01-30 00:25:20.412038729 +0000 UTC m=+910.283536781" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.439529 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf" podStartSLOduration=1.840816984 podStartE2EDuration="13.439505223s" podCreationTimestamp="2026-01-30 00:25:07 +0000 UTC" firstStartedPulling="2026-01-30 00:25:07.783837457 +0000 UTC m=+897.655335509" lastFinishedPulling="2026-01-30 00:25:19.382525686 +0000 UTC m=+909.254023748" observedRunningTime="2026-01-30 00:25:20.433734801 +0000 UTC m=+910.305232873" watchObservedRunningTime="2026-01-30 00:25:20.439505223 +0000 UTC m=+910.311003275" Jan 30 00:25:20 crc kubenswrapper[5103]: I0130 00:25:20.455154 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" podStartSLOduration=2.208552328 podStartE2EDuration="13.455131646s" podCreationTimestamp="2026-01-30 00:25:07 +0000 UTC" firstStartedPulling="2026-01-30 00:25:08.138925341 +0000 UTC m=+898.010423393" lastFinishedPulling="2026-01-30 00:25:19.385504659 +0000 UTC m=+909.257002711" observedRunningTime="2026-01-30 00:25:20.451715082 +0000 UTC m=+910.323213144" watchObservedRunningTime="2026-01-30 00:25:20.455131646 +0000 UTC m=+910.326629698" Jan 30 00:25:20 crc kubenswrapper[5103]: E0130 00:25:20.879136 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get 
\\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:25:24 crc kubenswrapper[5103]: I0130 00:25:24.367231 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" event={"ID":"67b9f41f-8bca-414a-aabd-5398b6f1ffe6","Type":"ContainerStarted","Data":"41f0a232689046faa91204aa3f8bbf1dd9cc89b25d442db880b06d793e18dbf8"} Jan 30 00:25:24 crc kubenswrapper[5103]: I0130 00:25:24.389745 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-pl76n" podStartSLOduration=6.150583442 podStartE2EDuration="9.389729174s" podCreationTimestamp="2026-01-30 00:25:15 +0000 UTC" firstStartedPulling="2026-01-30 00:25:19.655572686 +0000 UTC m=+909.527070738" lastFinishedPulling="2026-01-30 00:25:22.894718428 +0000 UTC m=+912.766216470" observedRunningTime="2026-01-30 00:25:24.386304309 +0000 UTC m=+914.257802381" watchObservedRunningTime="2026-01-30 00:25:24.389729174 +0000 UTC m=+914.261227226" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.139187 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-2l6mr"] Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.146421 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.150099 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.150282 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.159888 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-2l6mr"] Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.276496 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a9a4930-567c-4924-a3e4-a28fd367a358-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-2l6mr\" (UID: \"5a9a4930-567c-4924-a3e4-a28fd367a358\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.276548 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbr8f\" (UniqueName: \"kubernetes.io/projected/5a9a4930-567c-4924-a3e4-a28fd367a358-kube-api-access-jbr8f\") pod \"cert-manager-webhook-597b96b99b-2l6mr\" (UID: \"5a9a4930-567c-4924-a3e4-a28fd367a358\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.381287 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a9a4930-567c-4924-a3e4-a28fd367a358-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-2l6mr\" (UID: \"5a9a4930-567c-4924-a3e4-a28fd367a358\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.381355 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jbr8f\" (UniqueName: \"kubernetes.io/projected/5a9a4930-567c-4924-a3e4-a28fd367a358-kube-api-access-jbr8f\") pod \"cert-manager-webhook-597b96b99b-2l6mr\" (UID: \"5a9a4930-567c-4924-a3e4-a28fd367a358\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.413610 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a9a4930-567c-4924-a3e4-a28fd367a358-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-2l6mr\" (UID: \"5a9a4930-567c-4924-a3e4-a28fd367a358\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.414677 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbr8f\" (UniqueName: \"kubernetes.io/projected/5a9a4930-567c-4924-a3e4-a28fd367a358-kube-api-access-jbr8f\") pod \"cert-manager-webhook-597b96b99b-2l6mr\" (UID: \"5a9a4930-567c-4924-a3e4-a28fd367a358\") " pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.460673 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:26 crc kubenswrapper[5103]: I0130 00:25:26.733267 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-2l6mr"] Jan 30 00:25:27 crc kubenswrapper[5103]: I0130 00:25:27.408990 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" event={"ID":"5a9a4930-567c-4924-a3e4-a28fd367a358","Type":"ContainerStarted","Data":"58edb06db0240c37bbccbb7d4f765f69e9919b1896479a20b7f81851f3cad749"} Jan 30 00:25:31 crc kubenswrapper[5103]: I0130 00:25:31.351996 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-5r6dq" Jan 30 00:25:32 crc kubenswrapper[5103]: E0130 00:25:32.106321 5103 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:25:32 crc kubenswrapper[5103]: E0130 00:25:32.106818 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_openshift-marketplace(969009ac-f9ae-48c0-b45e-bf9a5844b7ff): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:25:32 crc kubenswrapper[5103]: E0130 00:25:32.108013 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:25:32 crc kubenswrapper[5103]: I0130 00:25:32.928500 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-jw2mg"] Jan 30 00:25:32 crc kubenswrapper[5103]: I0130 00:25:32.935348 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:32 crc kubenswrapper[5103]: I0130 00:25:32.940865 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-jw2mg"] Jan 30 00:25:32 crc kubenswrapper[5103]: I0130 00:25:32.944828 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-tjl4n\"" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.080915 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c385ca3a-0d6e-45bd-9ac2-d2e884254487-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-jw2mg\" (UID: \"c385ca3a-0d6e-45bd-9ac2-d2e884254487\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.080967 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghxgm\" (UniqueName: \"kubernetes.io/projected/c385ca3a-0d6e-45bd-9ac2-d2e884254487-kube-api-access-ghxgm\") pod \"cert-manager-cainjector-8966b78d4-jw2mg\" (UID: \"c385ca3a-0d6e-45bd-9ac2-d2e884254487\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.182516 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c385ca3a-0d6e-45bd-9ac2-d2e884254487-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-jw2mg\" (UID: \"c385ca3a-0d6e-45bd-9ac2-d2e884254487\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.182600 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ghxgm\" (UniqueName: \"kubernetes.io/projected/c385ca3a-0d6e-45bd-9ac2-d2e884254487-kube-api-access-ghxgm\") pod \"cert-manager-cainjector-8966b78d4-jw2mg\" (UID: \"c385ca3a-0d6e-45bd-9ac2-d2e884254487\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.203048 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c385ca3a-0d6e-45bd-9ac2-d2e884254487-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-jw2mg\" (UID: \"c385ca3a-0d6e-45bd-9ac2-d2e884254487\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.203076 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghxgm\" (UniqueName: \"kubernetes.io/projected/c385ca3a-0d6e-45bd-9ac2-d2e884254487-kube-api-access-ghxgm\") pod \"cert-manager-cainjector-8966b78d4-jw2mg\" (UID: \"c385ca3a-0d6e-45bd-9ac2-d2e884254487\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.260660 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" Jan 30 00:25:33 crc kubenswrapper[5103]: I0130 00:25:33.508663 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-jw2mg"] Jan 30 00:25:34 crc kubenswrapper[5103]: I0130 00:25:34.457526 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" event={"ID":"c385ca3a-0d6e-45bd-9ac2-d2e884254487","Type":"ContainerStarted","Data":"ce10f2ccbd30e876c749dbab6deef12dccfc4ad494b9d944318a21860b4c555c"} Jan 30 00:25:37 crc kubenswrapper[5103]: I0130 00:25:37.476613 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" event={"ID":"c385ca3a-0d6e-45bd-9ac2-d2e884254487","Type":"ContainerStarted","Data":"cae363e194e5ea8c9e412127092c8dc1d044f04f396d72d4c4c556a4cdb1a961"} Jan 30 00:25:37 crc kubenswrapper[5103]: I0130 00:25:37.477766 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" event={"ID":"5a9a4930-567c-4924-a3e4-a28fd367a358","Type":"ContainerStarted","Data":"e8b9ff3b04833909d5a2d1b5ed1e0bc713b835f87fda3eefd60721dca9dda58a"} Jan 30 00:25:37 crc kubenswrapper[5103]: I0130 00:25:37.478072 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:37 crc kubenswrapper[5103]: I0130 00:25:37.490709 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-jw2mg" podStartSLOduration=1.9689724690000001 podStartE2EDuration="5.490695265s" podCreationTimestamp="2026-01-30 00:25:32 +0000 UTC" firstStartedPulling="2026-01-30 00:25:33.513656476 +0000 UTC m=+923.385154528" lastFinishedPulling="2026-01-30 00:25:37.035379272 +0000 UTC m=+926.906877324" observedRunningTime="2026-01-30 00:25:37.489104966 +0000 UTC m=+927.360603028" watchObservedRunningTime="2026-01-30 00:25:37.490695265 +0000 UTC m=+927.362193317" Jan 30 00:25:37 crc kubenswrapper[5103]: I0130 00:25:37.512757 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" podStartSLOduration=1.238430475 podStartE2EDuration="11.512738766s" podCreationTimestamp="2026-01-30 00:25:26 +0000 UTC" firstStartedPulling="2026-01-30 00:25:26.7390235 +0000 UTC m=+916.610521552" lastFinishedPulling="2026-01-30 00:25:37.013331791 +0000 UTC m=+926.884829843" observedRunningTime="2026-01-30 00:25:37.511429044 +0000 UTC m=+927.382927106" watchObservedRunningTime="2026-01-30 00:25:37.512738766 +0000 UTC m=+927.384236828" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.466226 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-nxjsj"] Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.470778 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.473333 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-2cvbr\"" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.474527 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-nxjsj"] Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.583852 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4645f5f-5b75-41a8-8a06-0a9b5be3e07f-bound-sa-token\") pod \"cert-manager-759f64656b-nxjsj\" (UID: \"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f\") " pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.583975 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xjqq\" (UniqueName: \"kubernetes.io/projected/a4645f5f-5b75-41a8-8a06-0a9b5be3e07f-kube-api-access-6xjqq\") pod \"cert-manager-759f64656b-nxjsj\" (UID: \"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f\") " pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.685158 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4645f5f-5b75-41a8-8a06-0a9b5be3e07f-bound-sa-token\") pod \"cert-manager-759f64656b-nxjsj\" (UID: \"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f\") " pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.685223 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6xjqq\" (UniqueName: \"kubernetes.io/projected/a4645f5f-5b75-41a8-8a06-0a9b5be3e07f-kube-api-access-6xjqq\") pod \"cert-manager-759f64656b-nxjsj\" (UID: \"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f\") " pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.706192 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4645f5f-5b75-41a8-8a06-0a9b5be3e07f-bound-sa-token\") pod \"cert-manager-759f64656b-nxjsj\" (UID: \"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f\") " pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.716002 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xjqq\" (UniqueName: \"kubernetes.io/projected/a4645f5f-5b75-41a8-8a06-0a9b5be3e07f-kube-api-access-6xjqq\") pod \"cert-manager-759f64656b-nxjsj\" (UID: \"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f\") " pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:39 crc kubenswrapper[5103]: I0130 00:25:39.828893 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-nxjsj" Jan 30 00:25:40 crc kubenswrapper[5103]: I0130 00:25:40.311189 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-nxjsj"] Jan 30 00:25:40 crc kubenswrapper[5103]: I0130 00:25:40.527383 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-nxjsj" event={"ID":"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f","Type":"ContainerStarted","Data":"2390831b65b12887df4245ee0387db9210558e26783f75b78fb7d1dd9c53239c"} Jan 30 00:25:40 crc kubenswrapper[5103]: I0130 00:25:40.527651 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-nxjsj" event={"ID":"a4645f5f-5b75-41a8-8a06-0a9b5be3e07f","Type":"ContainerStarted","Data":"eaf6630deb209a6fee9b99ff99bed41bc39273947b8aedc616f49ffe0ac86ef7"} Jan 30 00:25:40 crc kubenswrapper[5103]: I0130 00:25:40.555803 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-nxjsj" podStartSLOduration=1.555786986 podStartE2EDuration="1.555786986s" podCreationTimestamp="2026-01-30 00:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 00:25:40.555165491 +0000 UTC m=+930.426663553" watchObservedRunningTime="2026-01-30 00:25:40.555786986 +0000 UTC m=+930.427285038" Jan 30 00:25:43 crc kubenswrapper[5103]: I0130 00:25:43.489890 5103 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-2l6mr" Jan 30 00:25:44 crc kubenswrapper[5103]: E0130 00:25:44.870977 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:25:57 crc kubenswrapper[5103]: E0130 00:25:57.870642 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image 
source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:25:58 crc kubenswrapper[5103]: I0130 00:25:58.493111 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:25:58 crc kubenswrapper[5103]: I0130 00:25:58.493201 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.140568 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495546-b8rgh"] Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.147673 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.151125 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.151261 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.151261 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.152584 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-b8rgh"] Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.279444 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcg4j\" (UniqueName: \"kubernetes.io/projected/d4b28226-5bd7-4b43-aec3-648633cbde03-kube-api-access-gcg4j\") pod \"auto-csr-approver-29495546-b8rgh\" (UID: \"d4b28226-5bd7-4b43-aec3-648633cbde03\") " pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.380524 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gcg4j\" (UniqueName: \"kubernetes.io/projected/d4b28226-5bd7-4b43-aec3-648633cbde03-kube-api-access-gcg4j\") pod \"auto-csr-approver-29495546-b8rgh\" (UID: \"d4b28226-5bd7-4b43-aec3-648633cbde03\") " pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.408577 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcg4j\" (UniqueName: \"kubernetes.io/projected/d4b28226-5bd7-4b43-aec3-648633cbde03-kube-api-access-gcg4j\") pod \"auto-csr-approver-29495546-b8rgh\" (UID: \"d4b28226-5bd7-4b43-aec3-648633cbde03\") " pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.473789 5103 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.935435 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-b8rgh"] Jan 30 00:26:00 crc kubenswrapper[5103]: I0130 00:26:00.945849 5103 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:26:01 crc kubenswrapper[5103]: I0130 00:26:01.676749 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" event={"ID":"d4b28226-5bd7-4b43-aec3-648633cbde03","Type":"ContainerStarted","Data":"c0e7ad70a4c4e49251fc72024b8ec3c64147cb0ffa6e464b95896129e850700d"} Jan 30 00:26:02 crc kubenswrapper[5103]: I0130 00:26:02.689526 5103 generic.go:358] "Generic (PLEG): container finished" podID="d4b28226-5bd7-4b43-aec3-648633cbde03" containerID="013351321e5d41d2ce75b5cd9d1d61d2f2152944d779218c070bb3e09843c3f2" exitCode=0 Jan 30 00:26:02 crc kubenswrapper[5103]: I0130 00:26:02.689680 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" event={"ID":"d4b28226-5bd7-4b43-aec3-648633cbde03","Type":"ContainerDied","Data":"013351321e5d41d2ce75b5cd9d1d61d2f2152944d779218c070bb3e09843c3f2"} Jan 30 00:26:03 crc kubenswrapper[5103]: I0130 00:26:03.966201 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:04 crc kubenswrapper[5103]: I0130 00:26:04.031422 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcg4j\" (UniqueName: \"kubernetes.io/projected/d4b28226-5bd7-4b43-aec3-648633cbde03-kube-api-access-gcg4j\") pod \"d4b28226-5bd7-4b43-aec3-648633cbde03\" (UID: \"d4b28226-5bd7-4b43-aec3-648633cbde03\") " Jan 30 00:26:04 crc kubenswrapper[5103]: I0130 00:26:04.036522 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b28226-5bd7-4b43-aec3-648633cbde03-kube-api-access-gcg4j" (OuterVolumeSpecName: "kube-api-access-gcg4j") pod "d4b28226-5bd7-4b43-aec3-648633cbde03" (UID: "d4b28226-5bd7-4b43-aec3-648633cbde03"). InnerVolumeSpecName "kube-api-access-gcg4j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:26:04 crc kubenswrapper[5103]: I0130 00:26:04.132709 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gcg4j\" (UniqueName: \"kubernetes.io/projected/d4b28226-5bd7-4b43-aec3-648633cbde03-kube-api-access-gcg4j\") on node \"crc\" DevicePath \"\"" Jan 30 00:26:04 crc kubenswrapper[5103]: I0130 00:26:04.705272 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" Jan 30 00:26:04 crc kubenswrapper[5103]: I0130 00:26:04.705294 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495546-b8rgh" event={"ID":"d4b28226-5bd7-4b43-aec3-648633cbde03","Type":"ContainerDied","Data":"c0e7ad70a4c4e49251fc72024b8ec3c64147cb0ffa6e464b95896129e850700d"} Jan 30 00:26:04 crc kubenswrapper[5103]: I0130 00:26:04.705788 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0e7ad70a4c4e49251fc72024b8ec3c64147cb0ffa6e464b95896129e850700d" Jan 30 00:26:05 crc kubenswrapper[5103]: I0130 00:26:05.029202 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-rtq7h"] Jan 30 00:26:05 crc kubenswrapper[5103]: I0130 00:26:05.032939 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495540-rtq7h"] Jan 30 00:26:06 crc kubenswrapper[5103]: I0130 00:26:06.880213 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b2c0b7-a88b-4f50-945a-938210a1c4cc" path="/var/lib/kubelet/pods/d6b2c0b7-a88b-4f50-945a-938210a1c4cc/volumes" Jan 30 00:26:10 crc kubenswrapper[5103]: E0130 00:26:10.872520 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:26:19 crc kubenswrapper[5103]: I0130 00:26:19.344663 5103 scope.go:117] "RemoveContainer" containerID="2aa077047165a4cd73187258a4227191c8d3c969d4671d6a4bcf6e0c0698cf60" Jan 30 00:26:24 crc kubenswrapper[5103]: E0130 00:26:24.808510 5103 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:26:24 crc kubenswrapper[5103]: E0130 00:26:24.809455 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init 
container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_openshift-marketplace(969009ac-f9ae-48c0-b45e-bf9a5844b7ff): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:26:24 crc kubenswrapper[5103]: E0130 00:26:24.810648 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:26:28 crc kubenswrapper[5103]: I0130 00:26:28.493315 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:26:28 crc kubenswrapper[5103]: I0130 00:26:28.493967 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:26:38 crc kubenswrapper[5103]: E0130 00:26:38.870517 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:26:51 crc kubenswrapper[5103]: E0130 00:26:51.872190 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:26:58 crc kubenswrapper[5103]: I0130 00:26:58.493571 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:26:58 crc kubenswrapper[5103]: I0130 00:26:58.494379 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:26:58 crc kubenswrapper[5103]: I0130 00:26:58.494479 
5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:26:58 crc kubenswrapper[5103]: I0130 00:26:58.495999 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90100aee52a55e7e0a5d62a8eaebe5bce65c117790d10db4419165f35e2674a5"} pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:26:58 crc kubenswrapper[5103]: I0130 00:26:58.496189 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" containerID="cri-o://90100aee52a55e7e0a5d62a8eaebe5bce65c117790d10db4419165f35e2674a5" gracePeriod=600 Jan 30 00:26:59 crc kubenswrapper[5103]: I0130 00:26:59.069191 5103 generic.go:358] "Generic (PLEG): container finished" podID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerID="90100aee52a55e7e0a5d62a8eaebe5bce65c117790d10db4419165f35e2674a5" exitCode=0 Jan 30 00:26:59 crc kubenswrapper[5103]: I0130 00:26:59.069238 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerDied","Data":"90100aee52a55e7e0a5d62a8eaebe5bce65c117790d10db4419165f35e2674a5"} Jan 30 00:26:59 crc kubenswrapper[5103]: I0130 00:26:59.069937 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"5856a55629d30d7d4fa88897c40b9c60d5a5ab3108816f0b3070ca166d2f7fac"} Jan 30 00:26:59 crc kubenswrapper[5103]: I0130 00:26:59.069965 5103 scope.go:117] "RemoveContainer" containerID="3697ed82987151cb30b4fcbd0a44d2a69c948067ae7968004a45b0cf18254730" Jan 30 00:27:06 crc kubenswrapper[5103]: E0130 00:27:06.876029 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:27:20 crc kubenswrapper[5103]: E0130 00:27:20.882654 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: 
unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:27:34 crc kubenswrapper[5103]: E0130 00:27:34.871319 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:27:47 crc kubenswrapper[5103]: E0130 00:27:47.116445 5103 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:27:47 crc kubenswrapper[5103]: E0130 00:27:47.117491 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_openshift-marketplace(969009ac-f9ae-48c0-b45e-bf9a5844b7ff): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:27:47 crc kubenswrapper[5103]: E0130 00:27:47.119603 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.146407 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495548-mfl5j"] Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.148242 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4b28226-5bd7-4b43-aec3-648633cbde03" containerName="oc" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.148274 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b28226-5bd7-4b43-aec3-648633cbde03" containerName="oc" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.148475 5103 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="d4b28226-5bd7-4b43-aec3-648633cbde03" containerName="oc" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.156183 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.156426 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495548-mfl5j"] Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.163705 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.163953 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.163987 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.268110 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxtmb\" (UniqueName: \"kubernetes.io/projected/7e1187f4-b882-49e8-b76a-6a33d208d851-kube-api-access-wxtmb\") pod \"auto-csr-approver-29495548-mfl5j\" (UID: \"7e1187f4-b882-49e8-b76a-6a33d208d851\") " pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.369715 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wxtmb\" (UniqueName: \"kubernetes.io/projected/7e1187f4-b882-49e8-b76a-6a33d208d851-kube-api-access-wxtmb\") pod \"auto-csr-approver-29495548-mfl5j\" (UID: \"7e1187f4-b882-49e8-b76a-6a33d208d851\") " pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.396956 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxtmb\" (UniqueName: \"kubernetes.io/projected/7e1187f4-b882-49e8-b76a-6a33d208d851-kube-api-access-wxtmb\") pod \"auto-csr-approver-29495548-mfl5j\" (UID: \"7e1187f4-b882-49e8-b76a-6a33d208d851\") " pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.480475 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:00 crc kubenswrapper[5103]: I0130 00:28:00.751361 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495548-mfl5j"] Jan 30 00:28:01 crc kubenswrapper[5103]: I0130 00:28:01.567353 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" event={"ID":"7e1187f4-b882-49e8-b76a-6a33d208d851","Type":"ContainerStarted","Data":"5bcf9357173d88e6df0d3b2e33dd2abdd84d18c5348156fff5fb2c30bd6cd088"} Jan 30 00:28:01 crc kubenswrapper[5103]: E0130 00:28:01.870368 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:28:02 crc kubenswrapper[5103]: I0130 00:28:02.587639 5103 generic.go:358] "Generic (PLEG): container finished" podID="7e1187f4-b882-49e8-b76a-6a33d208d851" containerID="c2af859b78905cccbd737ba86e5a69188e15dc8b11ba5934dd036e2c842496f3" exitCode=0 Jan 30 00:28:02 crc kubenswrapper[5103]: I0130 00:28:02.587701 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" event={"ID":"7e1187f4-b882-49e8-b76a-6a33d208d851","Type":"ContainerDied","Data":"c2af859b78905cccbd737ba86e5a69188e15dc8b11ba5934dd036e2c842496f3"} Jan 30 00:28:03 crc kubenswrapper[5103]: I0130 00:28:03.981025 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:04 crc kubenswrapper[5103]: I0130 00:28:04.121306 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxtmb\" (UniqueName: \"kubernetes.io/projected/7e1187f4-b882-49e8-b76a-6a33d208d851-kube-api-access-wxtmb\") pod \"7e1187f4-b882-49e8-b76a-6a33d208d851\" (UID: \"7e1187f4-b882-49e8-b76a-6a33d208d851\") " Jan 30 00:28:04 crc kubenswrapper[5103]: I0130 00:28:04.133316 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e1187f4-b882-49e8-b76a-6a33d208d851-kube-api-access-wxtmb" (OuterVolumeSpecName: "kube-api-access-wxtmb") pod "7e1187f4-b882-49e8-b76a-6a33d208d851" (UID: "7e1187f4-b882-49e8-b76a-6a33d208d851"). InnerVolumeSpecName "kube-api-access-wxtmb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:28:04 crc kubenswrapper[5103]: I0130 00:28:04.223681 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wxtmb\" (UniqueName: \"kubernetes.io/projected/7e1187f4-b882-49e8-b76a-6a33d208d851-kube-api-access-wxtmb\") on node \"crc\" DevicePath \"\"" Jan 30 00:28:04 crc kubenswrapper[5103]: I0130 00:28:04.603787 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" event={"ID":"7e1187f4-b882-49e8-b76a-6a33d208d851","Type":"ContainerDied","Data":"5bcf9357173d88e6df0d3b2e33dd2abdd84d18c5348156fff5fb2c30bd6cd088"} Jan 30 00:28:04 crc kubenswrapper[5103]: I0130 00:28:04.604065 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bcf9357173d88e6df0d3b2e33dd2abdd84d18c5348156fff5fb2c30bd6cd088" Jan 30 00:28:04 crc kubenswrapper[5103]: I0130 00:28:04.603936 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495548-mfl5j" Jan 30 00:28:05 crc kubenswrapper[5103]: I0130 00:28:05.061354 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-lzgvl"] Jan 30 00:28:05 crc kubenswrapper[5103]: I0130 00:28:05.070496 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495542-lzgvl"] Jan 30 00:28:06 crc kubenswrapper[5103]: I0130 00:28:06.879160 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6eabbd6-7a3e-476d-9412-948faeb44ce2" path="/var/lib/kubelet/pods/b6eabbd6-7a3e-476d-9412-948faeb44ce2/volumes" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.230196 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-twd9h/must-gather-9ltrv"] Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.231826 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7e1187f4-b882-49e8-b76a-6a33d208d851" containerName="oc" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.231841 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e1187f4-b882-49e8-b76a-6a33d208d851" containerName="oc" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.231996 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="7e1187f4-b882-49e8-b76a-6a33d208d851" containerName="oc" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.243996 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.246441 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-twd9h/must-gather-9ltrv"] Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.248375 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-twd9h\"/\"openshift-service-ca.crt\"" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.248491 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-twd9h\"/\"default-dockercfg-z9hl2\"" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.248910 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-twd9h\"/\"kube-root-ca.crt\"" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.343761 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59tff\" (UniqueName: \"kubernetes.io/projected/5d4f962b-cbec-41d6-9514-8d19a9455156-kube-api-access-59tff\") pod \"must-gather-9ltrv\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.343923 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d4f962b-cbec-41d6-9514-8d19a9455156-must-gather-output\") pod \"must-gather-9ltrv\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.444992 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d4f962b-cbec-41d6-9514-8d19a9455156-must-gather-output\") pod \"must-gather-9ltrv\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.445128 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-59tff\" (UniqueName: \"kubernetes.io/projected/5d4f962b-cbec-41d6-9514-8d19a9455156-kube-api-access-59tff\") pod \"must-gather-9ltrv\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.445551 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d4f962b-cbec-41d6-9514-8d19a9455156-must-gather-output\") pod \"must-gather-9ltrv\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.469688 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-59tff\" (UniqueName: \"kubernetes.io/projected/5d4f962b-cbec-41d6-9514-8d19a9455156-kube-api-access-59tff\") pod \"must-gather-9ltrv\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.573493 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:28:12 crc kubenswrapper[5103]: I0130 00:28:12.852924 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-twd9h/must-gather-9ltrv"] Jan 30 00:28:12 crc kubenswrapper[5103]: W0130 00:28:12.857724 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d4f962b_cbec_41d6_9514_8d19a9455156.slice/crio-93995129edc69df60d55c656961a88d9f887029b9a30c064c5fa1c106695e0f7 WatchSource:0}: Error finding container 93995129edc69df60d55c656961a88d9f887029b9a30c064c5fa1c106695e0f7: Status 404 returned error can't find the container with id 93995129edc69df60d55c656961a88d9f887029b9a30c064c5fa1c106695e0f7 Jan 30 00:28:13 crc kubenswrapper[5103]: I0130 00:28:13.682270 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-twd9h/must-gather-9ltrv" event={"ID":"5d4f962b-cbec-41d6-9514-8d19a9455156","Type":"ContainerStarted","Data":"93995129edc69df60d55c656961a88d9f887029b9a30c064c5fa1c106695e0f7"} Jan 30 00:28:15 crc kubenswrapper[5103]: E0130 00:28:15.870662 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:28:18 crc kubenswrapper[5103]: I0130 00:28:18.717466 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-twd9h/must-gather-9ltrv" event={"ID":"5d4f962b-cbec-41d6-9514-8d19a9455156","Type":"ContainerStarted","Data":"47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da"} Jan 30 00:28:18 crc kubenswrapper[5103]: I0130 00:28:18.717832 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-twd9h/must-gather-9ltrv" event={"ID":"5d4f962b-cbec-41d6-9514-8d19a9455156","Type":"ContainerStarted","Data":"93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90"} Jan 30 00:28:18 crc kubenswrapper[5103]: I0130 00:28:18.738141 5103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-twd9h/must-gather-9ltrv" podStartSLOduration=1.596602337 podStartE2EDuration="6.738115719s" podCreationTimestamp="2026-01-30 00:28:12 +0000 UTC" firstStartedPulling="2026-01-30 00:28:12.859861929 +0000 UTC m=+1082.731359981" lastFinishedPulling="2026-01-30 00:28:18.001375311 +0000 UTC m=+1087.872873363" observedRunningTime="2026-01-30 00:28:18.735250839 +0000 UTC m=+1088.606748931" watchObservedRunningTime="2026-01-30 00:28:18.738115719 +0000 UTC m=+1088.609613811" Jan 30 00:28:19 crc kubenswrapper[5103]: 
I0130 00:28:19.464661 5103 scope.go:117] "RemoveContainer" containerID="85eb57e0bc83856f4d4d5eb131d80fc4f6400f67738b8a99f839b0af0918444e" Jan 30 00:28:29 crc kubenswrapper[5103]: E0130 00:28:29.874245 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:28:41 crc kubenswrapper[5103]: E0130 00:28:41.872838 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:28:54 crc kubenswrapper[5103]: E0130 00:28:54.874692 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:28:58 crc kubenswrapper[5103]: I0130 
00:28:58.493301 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:28:58 crc kubenswrapper[5103]: I0130 00:28:58.493702 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:29:01 crc kubenswrapper[5103]: I0130 00:29:01.282078 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-94r9t_35998b47-ed37-4a50-9553-18147918d9cb/control-plane-machine-set-operator/0.log" Jan 30 00:29:01 crc kubenswrapper[5103]: I0130 00:29:01.440031 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-5tp7b_f3b3db2b-ab99-483b-a13c-4947269bc330/kube-rbac-proxy/0.log" Jan 30 00:29:01 crc kubenswrapper[5103]: I0130 00:29:01.492290 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-5tp7b_f3b3db2b-ab99-483b-a13c-4947269bc330/machine-api-operator/0.log" Jan 30 00:29:06 crc kubenswrapper[5103]: E0130 00:29:06.870955 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:29:14 crc kubenswrapper[5103]: I0130 00:29:14.156524 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-nxjsj_a4645f5f-5b75-41a8-8a06-0a9b5be3e07f/cert-manager-controller/0.log" Jan 30 00:29:14 crc kubenswrapper[5103]: I0130 00:29:14.241042 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-jw2mg_c385ca3a-0d6e-45bd-9ac2-d2e884254487/cert-manager-cainjector/0.log" Jan 30 00:29:14 crc kubenswrapper[5103]: I0130 00:29:14.310772 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-2l6mr_5a9a4930-567c-4924-a3e4-a28fd367a358/cert-manager-webhook/0.log" Jan 30 00:29:19 crc kubenswrapper[5103]: E0130 00:29:19.870632 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:29:27 crc kubenswrapper[5103]: I0130 00:29:27.823283 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-mmf75_957968da-8046-4a89-91ac-ecb8c0e83e85/prometheus-operator/0.log" Jan 30 00:29:27 crc kubenswrapper[5103]: I0130 00:29:27.904316 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp_a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d/prometheus-operator-admission-webhook/0.log" Jan 30 00:29:28 crc kubenswrapper[5103]: I0130 00:29:28.012177 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf_888a411a-eaa9-4b4f-877b-0653ce686e73/prometheus-operator-admission-webhook/0.log" Jan 30 00:29:28 crc kubenswrapper[5103]: I0130 00:29:28.080527 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-jcs7p_e7d2bde2-5437-4672-b6b6-f2babe73dff0/operator/0.log" Jan 30 00:29:28 crc kubenswrapper[5103]: I0130 00:29:28.203979 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-5r6dq_8c7fdb9f-be0e-428a-88e1-283c31de8ad1/perses-operator/0.log" Jan 30 00:29:28 crc kubenswrapper[5103]: I0130 00:29:28.493601 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:29:28 crc kubenswrapper[5103]: I0130 00:29:28.494162 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:29:30 crc kubenswrapper[5103]: E0130 00:29:30.886186 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry 
registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:29:41 crc kubenswrapper[5103]: E0130 00:29:41.870650 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.141343 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_969009ac-f9ae-48c0-b45e-bf9a5844b7ff/util/0.log" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.299490 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_969009ac-f9ae-48c0-b45e-bf9a5844b7ff/util/0.log" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.511040 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_969009ac-f9ae-48c0-b45e-bf9a5844b7ff/util/0.log" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.663709 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/util/0.log" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.865199 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/util/0.log" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.882126 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/pull/0.log" Jan 30 00:29:42 crc kubenswrapper[5103]: I0130 00:29:42.923842 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/pull/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 
00:29:43.048330 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/extract/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.055313 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/util/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.078968 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tsdvg_b1decb0e-49d8-404d-966d-b8249754982f/pull/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.217641 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/util/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.370550 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/pull/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.372706 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/util/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.412219 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/pull/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.555285 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/util/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.556860 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/extract/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.560774 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ckhcj_34d7632a-f4e3-4dcf-bf9a-ab1e24880a6e/pull/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.709689 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/extract-utilities/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.863680 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/extract-utilities/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.898481 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/extract-content/0.log" Jan 30 00:29:43 crc kubenswrapper[5103]: I0130 00:29:43.910890 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/extract-content/0.log" Jan 30 00:29:44 crc 
kubenswrapper[5103]: I0130 00:29:44.086240 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/extract-content/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.123856 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/extract-utilities/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.227901 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vq6tr_a044cd80-0a4b-43d0-bfa8-107bddaa28fc/registry-server/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.260648 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/extract-utilities/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.440257 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/extract-content/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.449800 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/extract-content/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.457874 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/extract-utilities/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.640497 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/extract-utilities/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.646153 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/extract-content/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.773666 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-m7wbv_0180b3c6-131f-4a8c-ac9a-1b410e056ae2/marketplace-operator/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.838294 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4gz47_5fd1ccc1-87a2-43d0-9183-1e907f804a16/registry-server/0.log" Jan 30 00:29:44 crc kubenswrapper[5103]: I0130 00:29:44.857514 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/extract-utilities/0.log" Jan 30 00:29:45 crc kubenswrapper[5103]: I0130 00:29:45.030620 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/extract-content/0.log" Jan 30 00:29:45 crc kubenswrapper[5103]: I0130 00:29:45.035625 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/extract-utilities/0.log" Jan 30 00:29:45 crc kubenswrapper[5103]: I0130 00:29:45.049358 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/extract-content/0.log" Jan 30 00:29:45 crc 
kubenswrapper[5103]: I0130 00:29:45.176133 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/extract-utilities/0.log" Jan 30 00:29:45 crc kubenswrapper[5103]: I0130 00:29:45.197917 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/extract-content/0.log" Jan 30 00:29:45 crc kubenswrapper[5103]: I0130 00:29:45.327804 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wmvfq_fc2ed764-8df0-4a15-9d66-c2abad3ee367/registry-server/0.log" Jan 30 00:29:54 crc kubenswrapper[5103]: E0130 00:29:54.871317 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:29:57 crc kubenswrapper[5103]: I0130 00:29:57.809612 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f4b8cbb9-n7vkp_a88f3da2-a157-4b8b-9fe6-ff6ef7466a8d/prometheus-operator-admission-webhook/0.log" Jan 30 00:29:57 crc kubenswrapper[5103]: I0130 00:29:57.826548 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-mmf75_957968da-8046-4a89-91ac-ecb8c0e83e85/prometheus-operator/0.log" Jan 30 00:29:57 crc kubenswrapper[5103]: I0130 00:29:57.839959 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f4b8cbb9-wl7kf_888a411a-eaa9-4b4f-877b-0653ce686e73/prometheus-operator-admission-webhook/0.log" Jan 30 00:29:57 crc kubenswrapper[5103]: I0130 00:29:57.910902 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-jcs7p_e7d2bde2-5437-4672-b6b6-f2babe73dff0/operator/0.log" Jan 30 00:29:57 crc kubenswrapper[5103]: I0130 00:29:57.962415 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-5r6dq_8c7fdb9f-be0e-428a-88e1-283c31de8ad1/perses-operator/0.log" Jan 30 00:29:58 crc kubenswrapper[5103]: I0130 00:29:58.492820 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:29:58 crc kubenswrapper[5103]: I0130 00:29:58.492910 5103 prober.go:120] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:29:58 crc kubenswrapper[5103]: I0130 00:29:58.492965 5103 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" Jan 30 00:29:58 crc kubenswrapper[5103]: I0130 00:29:58.493642 5103 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5856a55629d30d7d4fa88897c40b9c60d5a5ab3108816f0b3070ca166d2f7fac"} pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 00:29:58 crc kubenswrapper[5103]: I0130 00:29:58.493719 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" containerID="cri-o://5856a55629d30d7d4fa88897c40b9c60d5a5ab3108816f0b3070ca166d2f7fac" gracePeriod=600 Jan 30 00:29:59 crc kubenswrapper[5103]: I0130 00:29:59.366258 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerDied","Data":"5856a55629d30d7d4fa88897c40b9c60d5a5ab3108816f0b3070ca166d2f7fac"} Jan 30 00:29:59 crc kubenswrapper[5103]: I0130 00:29:59.366233 5103 generic.go:358] "Generic (PLEG): container finished" podID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerID="5856a55629d30d7d4fa88897c40b9c60d5a5ab3108816f0b3070ca166d2f7fac" exitCode=0 Jan 30 00:29:59 crc kubenswrapper[5103]: I0130 00:29:59.366635 5103 scope.go:117] "RemoveContainer" containerID="90100aee52a55e7e0a5d62a8eaebe5bce65c117790d10db4419165f35e2674a5" Jan 30 00:29:59 crc kubenswrapper[5103]: I0130 00:29:59.366708 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" event={"ID":"37f6985e-a0c9-43c8-a1bc-00f85204425f","Type":"ContainerStarted","Data":"130ec4cee10ecca2dbb2494485497c4c74cdc5486f58130b4af70c708c33184f"} Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.135985 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495550-h2pkv"] Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.141264 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.141659 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm"] Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.143591 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.145021 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.146853 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.157338 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495550-h2pkv"] Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.157377 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm"] Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.157523 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.179698 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.181543 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.249459 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjcsv\" (UniqueName: \"kubernetes.io/projected/d9a13ac8-6221-4293-b335-523278207648-kube-api-access-rjcsv\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.249788 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9a13ac8-6221-4293-b335-523278207648-secret-volume\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.249809 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9a13ac8-6221-4293-b335-523278207648-config-volume\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.249829 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sg8j\" (UniqueName: \"kubernetes.io/projected/b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b-kube-api-access-6sg8j\") pod \"auto-csr-approver-29495550-h2pkv\" (UID: 
\"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b\") " pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.351241 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rjcsv\" (UniqueName: \"kubernetes.io/projected/d9a13ac8-6221-4293-b335-523278207648-kube-api-access-rjcsv\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.351288 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9a13ac8-6221-4293-b335-523278207648-secret-volume\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.351307 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9a13ac8-6221-4293-b335-523278207648-config-volume\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.351457 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6sg8j\" (UniqueName: \"kubernetes.io/projected/b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b-kube-api-access-6sg8j\") pod \"auto-csr-approver-29495550-h2pkv\" (UID: \"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b\") " pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.352356 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9a13ac8-6221-4293-b335-523278207648-config-volume\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.358475 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9a13ac8-6221-4293-b335-523278207648-secret-volume\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.369515 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjcsv\" (UniqueName: \"kubernetes.io/projected/d9a13ac8-6221-4293-b335-523278207648-kube-api-access-rjcsv\") pod \"collect-profiles-29495550-zgbsm\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.383320 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sg8j\" (UniqueName: \"kubernetes.io/projected/b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b-kube-api-access-6sg8j\") pod \"auto-csr-approver-29495550-h2pkv\" (UID: \"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b\") " pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.456506 5103 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.482130 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.668207 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495550-h2pkv"] Jan 30 00:30:00 crc kubenswrapper[5103]: I0130 00:30:00.709077 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm"] Jan 30 00:30:00 crc kubenswrapper[5103]: W0130 00:30:00.714143 5103 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9a13ac8_6221_4293_b335_523278207648.slice/crio-99319fd205de14d71c5f414e182b53d333a3c99c51c387a4a559430e31052c41 WatchSource:0}: Error finding container 99319fd205de14d71c5f414e182b53d333a3c99c51c387a4a559430e31052c41: Status 404 returned error can't find the container with id 99319fd205de14d71c5f414e182b53d333a3c99c51c387a4a559430e31052c41 Jan 30 00:30:01 crc kubenswrapper[5103]: I0130 00:30:01.384833 5103 generic.go:358] "Generic (PLEG): container finished" podID="d9a13ac8-6221-4293-b335-523278207648" containerID="f158a1749d81beabc76f102764ffb4986db8d74961f96e0199513925785628df" exitCode=0 Jan 30 00:30:01 crc kubenswrapper[5103]: I0130 00:30:01.384897 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" event={"ID":"d9a13ac8-6221-4293-b335-523278207648","Type":"ContainerDied","Data":"f158a1749d81beabc76f102764ffb4986db8d74961f96e0199513925785628df"} Jan 30 00:30:01 crc kubenswrapper[5103]: I0130 00:30:01.385225 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" event={"ID":"d9a13ac8-6221-4293-b335-523278207648","Type":"ContainerStarted","Data":"99319fd205de14d71c5f414e182b53d333a3c99c51c387a4a559430e31052c41"} Jan 30 00:30:01 crc kubenswrapper[5103]: I0130 00:30:01.388889 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" event={"ID":"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b","Type":"ContainerStarted","Data":"65796c5027130313151c38769c983755d4471154a7b9d3fb4f885fb6d7f10ea5"} Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.683636 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.798573 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9a13ac8-6221-4293-b335-523278207648-config-volume\") pod \"d9a13ac8-6221-4293-b335-523278207648\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.798642 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9a13ac8-6221-4293-b335-523278207648-secret-volume\") pod \"d9a13ac8-6221-4293-b335-523278207648\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.798769 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjcsv\" (UniqueName: \"kubernetes.io/projected/d9a13ac8-6221-4293-b335-523278207648-kube-api-access-rjcsv\") pod \"d9a13ac8-6221-4293-b335-523278207648\" (UID: \"d9a13ac8-6221-4293-b335-523278207648\") " Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.801129 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9a13ac8-6221-4293-b335-523278207648-config-volume" (OuterVolumeSpecName: "config-volume") pod "d9a13ac8-6221-4293-b335-523278207648" (UID: "d9a13ac8-6221-4293-b335-523278207648"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.807616 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9a13ac8-6221-4293-b335-523278207648-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d9a13ac8-6221-4293-b335-523278207648" (UID: "d9a13ac8-6221-4293-b335-523278207648"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.807648 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9a13ac8-6221-4293-b335-523278207648-kube-api-access-rjcsv" (OuterVolumeSpecName: "kube-api-access-rjcsv") pod "d9a13ac8-6221-4293-b335-523278207648" (UID: "d9a13ac8-6221-4293-b335-523278207648"). InnerVolumeSpecName "kube-api-access-rjcsv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.900371 5103 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9a13ac8-6221-4293-b335-523278207648-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.900418 5103 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9a13ac8-6221-4293-b335-523278207648-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:02 crc kubenswrapper[5103]: I0130 00:30:02.900436 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rjcsv\" (UniqueName: \"kubernetes.io/projected/d9a13ac8-6221-4293-b335-523278207648-kube-api-access-rjcsv\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:03 crc kubenswrapper[5103]: I0130 00:30:03.406073 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" event={"ID":"d9a13ac8-6221-4293-b335-523278207648","Type":"ContainerDied","Data":"99319fd205de14d71c5f414e182b53d333a3c99c51c387a4a559430e31052c41"} Jan 30 00:30:03 crc kubenswrapper[5103]: I0130 00:30:03.406347 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99319fd205de14d71c5f414e182b53d333a3c99c51c387a4a559430e31052c41" Jan 30 00:30:03 crc kubenswrapper[5103]: I0130 00:30:03.406202 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495550-zgbsm" Jan 30 00:30:03 crc kubenswrapper[5103]: I0130 00:30:03.410493 5103 generic.go:358] "Generic (PLEG): container finished" podID="b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b" containerID="1fd4fa358eb20ef4f4388ad86d8aea58f2fc537950e57966f555eb8df763b409" exitCode=0 Jan 30 00:30:03 crc kubenswrapper[5103]: I0130 00:30:03.410550 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" event={"ID":"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b","Type":"ContainerDied","Data":"1fd4fa358eb20ef4f4388ad86d8aea58f2fc537950e57966f555eb8df763b409"} Jan 30 00:30:04 crc kubenswrapper[5103]: I0130 00:30:04.709080 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:04 crc kubenswrapper[5103]: I0130 00:30:04.830484 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sg8j\" (UniqueName: \"kubernetes.io/projected/b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b-kube-api-access-6sg8j\") pod \"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b\" (UID: \"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b\") " Jan 30 00:30:04 crc kubenswrapper[5103]: I0130 00:30:04.839343 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b-kube-api-access-6sg8j" (OuterVolumeSpecName: "kube-api-access-6sg8j") pod "b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b" (UID: "b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b"). InnerVolumeSpecName "kube-api-access-6sg8j". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:30:04 crc kubenswrapper[5103]: I0130 00:30:04.931793 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6sg8j\" (UniqueName: \"kubernetes.io/projected/b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b-kube-api-access-6sg8j\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:05 crc kubenswrapper[5103]: I0130 00:30:05.428774 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" event={"ID":"b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b","Type":"ContainerDied","Data":"65796c5027130313151c38769c983755d4471154a7b9d3fb4f885fb6d7f10ea5"} Jan 30 00:30:05 crc kubenswrapper[5103]: I0130 00:30:05.428845 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65796c5027130313151c38769c983755d4471154a7b9d3fb4f885fb6d7f10ea5" Jan 30 00:30:05 crc kubenswrapper[5103]: I0130 00:30:05.428794 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495550-h2pkv" Jan 30 00:30:05 crc kubenswrapper[5103]: I0130 00:30:05.790738 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kj6vw"] Jan 30 00:30:05 crc kubenswrapper[5103]: I0130 00:30:05.794275 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495544-kj6vw"] Jan 30 00:30:06 crc kubenswrapper[5103]: I0130 00:30:06.879271 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad58695-120d-466b-bec0-3198637da77d" path="/var/lib/kubelet/pods/5ad58695-120d-466b-bec0-3198637da77d/volumes" Jan 30 00:30:10 crc kubenswrapper[5103]: E0130 00:30:09.870894 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:30:11 crc kubenswrapper[5103]: I0130 00:30:11.545769 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swfns_a7dd7e02-4357-4643-8c23-2fb57ba70405/kube-multus/0.log" Jan 30 00:30:11 crc kubenswrapper[5103]: I0130 00:30:11.545812 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swfns_a7dd7e02-4357-4643-8c23-2fb57ba70405/kube-multus/0.log" Jan 30 00:30:11 crc kubenswrapper[5103]: I0130 00:30:11.548151 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:30:11 crc kubenswrapper[5103]: I0130 00:30:11.548458 5103 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Jan 30 00:30:19 crc kubenswrapper[5103]: I0130 00:30:19.594925 5103 scope.go:117] "RemoveContainer" containerID="cc6d50dd8cf2d79869118c21971c35ee57934965ea393fbb5dc64b460746ac0e" Jan 30 00:30:20 crc kubenswrapper[5103]: E0130 00:30:20.883908 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:30:32 crc kubenswrapper[5103]: E0130 00:30:32.621709 5103 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" image="registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb" Jan 30 00:30:32 crc kubenswrapper[5103]: E0130 00:30:32.622560 5103 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000240000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc_openshift-marketplace(969009ac-f9ae-48c0-b45e-bf9a5844b7ff): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \"https://registry.connect.redhat.com/v2/\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving" logger="UnhandledError" Jan 30 00:30:32 crc kubenswrapper[5103]: E0130 00:30:32.624523 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:30:36 crc kubenswrapper[5103]: I0130 00:30:36.685478 5103 generic.go:358] "Generic (PLEG): container finished" podID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerID="93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90" exitCode=0 Jan 30 00:30:36 crc kubenswrapper[5103]: I0130 00:30:36.685588 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-twd9h/must-gather-9ltrv" event={"ID":"5d4f962b-cbec-41d6-9514-8d19a9455156","Type":"ContainerDied","Data":"93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90"} Jan 30 00:30:36 crc kubenswrapper[5103]: I0130 00:30:36.686431 5103 
scope.go:117] "RemoveContainer" containerID="93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90" Jan 30 00:30:37 crc kubenswrapper[5103]: I0130 00:30:37.144749 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-twd9h_must-gather-9ltrv_5d4f962b-cbec-41d6-9514-8d19a9455156/gather/0.log" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.256646 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-twd9h/must-gather-9ltrv"] Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.257679 5103 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-twd9h/must-gather-9ltrv" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="copy" containerID="cri-o://47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da" gracePeriod=2 Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.262031 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-twd9h/must-gather-9ltrv"] Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.264432 5103 status_manager.go:895] "Failed to get status for pod" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" pod="openshift-must-gather-twd9h/must-gather-9ltrv" err="pods \"must-gather-9ltrv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-twd9h\": no relationship found between node 'crc' and this object" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.584033 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-twd9h_must-gather-9ltrv_5d4f962b-cbec-41d6-9514-8d19a9455156/copy/0.log" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.585002 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.586676 5103 status_manager.go:895] "Failed to get status for pod" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" pod="openshift-must-gather-twd9h/must-gather-9ltrv" err="pods \"must-gather-9ltrv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-twd9h\": no relationship found between node 'crc' and this object" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.742217 5103 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-twd9h_must-gather-9ltrv_5d4f962b-cbec-41d6-9514-8d19a9455156/copy/0.log" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.742690 5103 generic.go:358] "Generic (PLEG): container finished" podID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerID="47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da" exitCode=143 Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.742761 5103 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-twd9h/must-gather-9ltrv" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.742912 5103 scope.go:117] "RemoveContainer" containerID="47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.744469 5103 status_manager.go:895] "Failed to get status for pod" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" pod="openshift-must-gather-twd9h/must-gather-9ltrv" err="pods \"must-gather-9ltrv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-twd9h\": no relationship found between node 'crc' and this object" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.750001 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d4f962b-cbec-41d6-9514-8d19a9455156-must-gather-output\") pod \"5d4f962b-cbec-41d6-9514-8d19a9455156\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.750096 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59tff\" (UniqueName: \"kubernetes.io/projected/5d4f962b-cbec-41d6-9514-8d19a9455156-kube-api-access-59tff\") pod \"5d4f962b-cbec-41d6-9514-8d19a9455156\" (UID: \"5d4f962b-cbec-41d6-9514-8d19a9455156\") " Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.760252 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4f962b-cbec-41d6-9514-8d19a9455156-kube-api-access-59tff" (OuterVolumeSpecName: "kube-api-access-59tff") pod "5d4f962b-cbec-41d6-9514-8d19a9455156" (UID: "5d4f962b-cbec-41d6-9514-8d19a9455156"). InnerVolumeSpecName "kube-api-access-59tff". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.763156 5103 scope.go:117] "RemoveContainer" containerID="93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.792026 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d4f962b-cbec-41d6-9514-8d19a9455156-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5d4f962b-cbec-41d6-9514-8d19a9455156" (UID: "5d4f962b-cbec-41d6-9514-8d19a9455156"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.814886 5103 scope.go:117] "RemoveContainer" containerID="47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da" Jan 30 00:30:43 crc kubenswrapper[5103]: E0130 00:30:43.815275 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da\": container with ID starting with 47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da not found: ID does not exist" containerID="47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.815517 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da"} err="failed to get container status \"47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da\": rpc error: code = NotFound desc = could not find container \"47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da\": container with ID starting with 47e4386342669a03080225fcab5eb51e8b7c0eb570a5f6e1af958d9a125888da not found: ID does not exist" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.815597 5103 scope.go:117] "RemoveContainer" containerID="93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90" Jan 30 00:30:43 crc kubenswrapper[5103]: E0130 00:30:43.815949 5103 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90\": container with ID starting with 93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90 not found: ID does not exist" containerID="93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.815977 5103 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90"} err="failed to get container status \"93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90\": rpc error: code = NotFound desc = could not find container \"93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90\": container with ID starting with 93cd50467e28c96ba4779aa6ade18528e46f3b87c58bc804a1ab7ca48d2f1f90 not found: ID does not exist" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.851374 5103 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d4f962b-cbec-41d6-9514-8d19a9455156-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:43 crc kubenswrapper[5103]: I0130 00:30:43.851409 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-59tff\" (UniqueName: \"kubernetes.io/projected/5d4f962b-cbec-41d6-9514-8d19a9455156-kube-api-access-59tff\") on node \"crc\" DevicePath \"\"" Jan 30 00:30:44 crc kubenswrapper[5103]: I0130 00:30:44.061971 5103 status_manager.go:895] "Failed to get status for pod" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" pod="openshift-must-gather-twd9h/must-gather-9ltrv" err="pods \"must-gather-9ltrv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-twd9h\": no relationship found between node 'crc' and this object" Jan 30 00:30:44 crc 
kubenswrapper[5103]: E0130 00:30:44.873190 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:30:44 crc kubenswrapper[5103]: I0130 00:30:44.882000 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" path="/var/lib/kubelet/pods/5d4f962b-cbec-41d6-9514-8d19a9455156/volumes" Jan 30 00:30:44 crc kubenswrapper[5103]: I0130 00:30:44.899195 5103 status_manager.go:895] "Failed to get status for pod" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" pod="openshift-must-gather-twd9h/must-gather-9ltrv" err="pods \"must-gather-9ltrv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-twd9h\": no relationship found between node 'crc' and this object" Jan 30 00:30:57 crc kubenswrapper[5103]: E0130 00:30:57.870525 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:31:09 crc kubenswrapper[5103]: I0130 00:31:09.974915 5103 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 00:31:09 crc kubenswrapper[5103]: E0130 00:31:09.976011 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:31:23 crc kubenswrapper[5103]: E0130 00:31:23.871021 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:31:35 crc kubenswrapper[5103]: E0130 00:31:35.872231 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:31:48 crc kubenswrapper[5103]: E0130 00:31:48.871007 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get 
\\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:31:58 crc kubenswrapper[5103]: I0130 00:31:58.493235 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:31:58 crc kubenswrapper[5103]: I0130 00:31:58.493619 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.148228 5103 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29495552-j4vvz"] Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149427 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b" containerName="oc" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149449 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b" containerName="oc" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149482 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="gather" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149491 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="gather" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149506 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="copy" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149515 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="copy" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149538 5103 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9a13ac8-6221-4293-b335-523278207648" containerName="collect-profiles" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149546 5103 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a13ac8-6221-4293-b335-523278207648" containerName="collect-profiles" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149655 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="d9a13ac8-6221-4293-b335-523278207648" containerName="collect-profiles" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149670 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="copy" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149683 5103 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="5d4f962b-cbec-41d6-9514-8d19a9455156" containerName="gather" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.149695 5103 memory_manager.go:356] "RemoveStaleState removing state" podUID="b030c7cb-04f1-4e89-918b-aa3b8fbe4e1b" containerName="oc" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.154643 5103 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.157255 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.158394 5103 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.158552 5103 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-bq2dh\"" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.170708 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495552-j4vvz"] Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.271245 5103 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8jtc\" (UniqueName: \"kubernetes.io/projected/ceb0ecdd-c611-4860-853f-570beffcf4e5-kube-api-access-d8jtc\") pod \"auto-csr-approver-29495552-j4vvz\" (UID: \"ceb0ecdd-c611-4860-853f-570beffcf4e5\") " pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.373902 5103 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d8jtc\" (UniqueName: \"kubernetes.io/projected/ceb0ecdd-c611-4860-853f-570beffcf4e5-kube-api-access-d8jtc\") pod \"auto-csr-approver-29495552-j4vvz\" (UID: \"ceb0ecdd-c611-4860-853f-570beffcf4e5\") " pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.395668 5103 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8jtc\" (UniqueName: \"kubernetes.io/projected/ceb0ecdd-c611-4860-853f-570beffcf4e5-kube-api-access-d8jtc\") pod \"auto-csr-approver-29495552-j4vvz\" (UID: \"ceb0ecdd-c611-4860-853f-570beffcf4e5\") " pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.486660 5103 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:00 crc kubenswrapper[5103]: I0130 00:32:00.766908 5103 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29495552-j4vvz"] Jan 30 00:32:00 crc kubenswrapper[5103]: E0130 00:32:00.880869 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:32:01 crc kubenswrapper[5103]: I0130 00:32:01.428938 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" event={"ID":"ceb0ecdd-c611-4860-853f-570beffcf4e5","Type":"ContainerStarted","Data":"2538de0811533f2e719ed480e05bdab21c82292f7102f3f09b96fd3e8a3f6c42"} Jan 30 00:32:02 crc kubenswrapper[5103]: I0130 00:32:02.437708 5103 generic.go:358] "Generic (PLEG): container finished" podID="ceb0ecdd-c611-4860-853f-570beffcf4e5" containerID="7ee18343af479626b3c9134413db1d2ecff31943eaa3b062712f47bcf2e15ba3" exitCode=0 Jan 30 00:32:02 crc kubenswrapper[5103]: I0130 00:32:02.437848 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" event={"ID":"ceb0ecdd-c611-4860-853f-570beffcf4e5","Type":"ContainerDied","Data":"7ee18343af479626b3c9134413db1d2ecff31943eaa3b062712f47bcf2e15ba3"} Jan 30 00:32:03 crc kubenswrapper[5103]: I0130 00:32:03.746376 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:03 crc kubenswrapper[5103]: I0130 00:32:03.823698 5103 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8jtc\" (UniqueName: \"kubernetes.io/projected/ceb0ecdd-c611-4860-853f-570beffcf4e5-kube-api-access-d8jtc\") pod \"ceb0ecdd-c611-4860-853f-570beffcf4e5\" (UID: \"ceb0ecdd-c611-4860-853f-570beffcf4e5\") " Jan 30 00:32:03 crc kubenswrapper[5103]: I0130 00:32:03.831032 5103 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb0ecdd-c611-4860-853f-570beffcf4e5-kube-api-access-d8jtc" (OuterVolumeSpecName: "kube-api-access-d8jtc") pod "ceb0ecdd-c611-4860-853f-570beffcf4e5" (UID: "ceb0ecdd-c611-4860-853f-570beffcf4e5"). InnerVolumeSpecName "kube-api-access-d8jtc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 00:32:03 crc kubenswrapper[5103]: I0130 00:32:03.926258 5103 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d8jtc\" (UniqueName: \"kubernetes.io/projected/ceb0ecdd-c611-4860-853f-570beffcf4e5-kube-api-access-d8jtc\") on node \"crc\" DevicePath \"\"" Jan 30 00:32:04 crc kubenswrapper[5103]: I0130 00:32:04.456179 5103 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" Jan 30 00:32:04 crc kubenswrapper[5103]: I0130 00:32:04.456230 5103 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29495552-j4vvz" event={"ID":"ceb0ecdd-c611-4860-853f-570beffcf4e5","Type":"ContainerDied","Data":"2538de0811533f2e719ed480e05bdab21c82292f7102f3f09b96fd3e8a3f6c42"} Jan 30 00:32:04 crc kubenswrapper[5103]: I0130 00:32:04.456746 5103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2538de0811533f2e719ed480e05bdab21c82292f7102f3f09b96fd3e8a3f6c42" Jan 30 00:32:04 crc kubenswrapper[5103]: I0130 00:32:04.829828 5103 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-b8rgh"] Jan 30 00:32:04 crc kubenswrapper[5103]: I0130 00:32:04.839080 5103 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29495546-b8rgh"] Jan 30 00:32:04 crc kubenswrapper[5103]: I0130 00:32:04.877647 5103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b28226-5bd7-4b43-aec3-648633cbde03" path="/var/lib/kubelet/pods/d4b28226-5bd7-4b43-aec3-648633cbde03/volumes" Jan 30 00:32:11 crc kubenswrapper[5103]: E0130 00:32:11.871148 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:32:19 crc kubenswrapper[5103]: I0130 00:32:19.757392 5103 scope.go:117] "RemoveContainer" containerID="013351321e5d41d2ce75b5cd9d1d61d2f2152944d779218c070bb3e09843c3f2" Jan 30 00:32:26 crc kubenswrapper[5103]: E0130 00:32:26.871882 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry 
registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff" Jan 30 00:32:28 crc kubenswrapper[5103]: I0130 00:32:28.492960 5103 patch_prober.go:28] interesting pod/machine-config-daemon-6g6hp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 00:32:28 crc kubenswrapper[5103]: I0130 00:32:28.493043 5103 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-6g6hp" podUID="37f6985e-a0c9-43c8-a1bc-00f85204425f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 00:32:37 crc kubenswrapper[5103]: E0130 00:32:37.870200 5103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://registry.connect.redhat.com/elastic/eck@sha256:815e6949d8b96d832660e6ed715f8fbf080b230f1bccfc3e0f38781585b14eeb: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving; artifact err: get manifest: build image source: pinging container registry registry.connect.redhat.com: Get \\\"https://registry.connect.redhat.com/v2/\\\": dial tcp: lookup registry.connect.redhat.com on 199.204.47.54:53: server misbehaving\"" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5er47bc" podUID="969009ac-f9ae-48c0-b45e-bf9a5844b7ff"