Jan 20 09:08:09 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 20 09:08:09 crc kubenswrapper[5115]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 09:08:09 crc kubenswrapper[5115]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 20 09:08:09 crc kubenswrapper[5115]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 09:08:09 crc kubenswrapper[5115]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 09:08:09 crc kubenswrapper[5115]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 09:08:09 crc kubenswrapper[5115]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
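The deprecation warnings above all point at the file given by --config. A minimal sketch of the equivalent KubeletConfiguration fields follows; the field names are from the upstream kubelet config API, but the values shown are illustrative assumptions, not taken from this node:

```yaml
# Hypothetical fragment of the file passed via --config
# (this node uses /etc/kubernetes/kubelet.conf per the FLAG dump below).
# Values are placeholders, not this node's actual settings.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock   # replaces --container-runtime-endpoint
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # replaces --volume-plugin-dir
registerWithTaints:                                        # replaces --register-with-taints
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
systemReserved:                                            # replaces --system-reserved
  cpu: 500m
  memory: 1Gi
```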
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.957713 5115 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960886 5115 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960918 5115 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960923 5115 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960927 5115 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960930 5115 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960935 5115 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960939 5115 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960944 5115 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960948 5115 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960952 5115 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960957 5115 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960961 5115 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960966 5115 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960970 5115 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960975 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960980 5115 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960985 5115 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960990 5115 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960994 5115 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.960997 5115 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961012 5115 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961017 5115 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961020 5115 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961024 5115 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961028 5115 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961031 5115 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961035 5115 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961038 5115 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961041 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961045 5115 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961048 5115 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961051 5115 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961054 5115 feature_gate.go:328] unrecognized feature gate: Example
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961057 5115 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961060 5115 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961064 5115 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961067 5115 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961070 5115 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961076 5115 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961080 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961084 5115 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961087 5115 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961090 5115 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961093 5115 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961097 5115 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961101 5115 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961104 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961108 5115 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961112 5115 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961116 5115 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961120 5115 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961124 5115 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961127 5115 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961131 5115 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961135 5115 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961138 5115 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961141 5115 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961145 5115 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961148 5115 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961151 5115 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961155 5115 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961158 5115 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961161 5115 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961164 5115 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961167 5115 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961171 5115 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961174 5115 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961177 5115 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961180 5115 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961183 5115 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961186 5115 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961190 5115 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961193 5115 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961196 5115 feature_gate.go:328] unrecognized feature gate: Example2
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961200 5115 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961203 5115 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961206 5115 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961210 5115 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961214 5115 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961217 5115 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961220 5115 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961288 5115 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961296 5115 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961301 5115 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961306 5115 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961309 5115 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961867 5115 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961889 5115 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961919 5115 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961924 5115 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961927 5115 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961931 5115 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961934 5115 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961938 5115 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961941 5115 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961945 5115 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961948 5115 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961951 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961955 5115 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961958 5115 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961961 5115 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961965 5115 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961968 5115 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961972 5115 feature_gate.go:328] unrecognized feature gate: Example
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961977 5115 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961981 5115 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961985 5115 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961989 5115 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961993 5115 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.961997 5115 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962000 5115 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962003 5115 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962007 5115 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962010 5115 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962014 5115 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962017 5115 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962020 5115 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962023 5115 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962028 5115 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962031 5115 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962035 5115 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962038 5115 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962041 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962044 5115 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962048 5115 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962051 5115 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962054 5115 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962059 5115 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962063 5115 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962066 5115 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962070 5115 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962074 5115 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962077 5115 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962080 5115 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962084 5115 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962098 5115 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962101 5115 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962105 5115 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962108 5115 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962112 5115 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962115 5115 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962118 5115 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962123 5115 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962126 5115 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962130 5115 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962136 5115 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962140 5115 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962143 5115 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962146 5115 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962150 5115 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962153 5115 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962157 5115 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962161 5115 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962164 5115 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962168 5115 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962171 5115 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962174 5115 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962178 5115 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962181 5115 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962185 5115 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962188 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962191 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962196 5115 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962199 5115 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962202 5115 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962206 5115 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962209 5115 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962215 5115 feature_gate.go:328] unrecognized feature gate: Example2
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962218 5115 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962221 5115 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962224 5115 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.962228 5115 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962523 5115 flags.go:64] FLAG: --address="0.0.0.0"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962537 5115 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962547 5115 flags.go:64] FLAG: --anonymous-auth="true"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962553 5115 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962559 5115 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962566 5115 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962571 5115 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962577 5115 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962581 5115 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962584 5115 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962588 5115 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962592 5115 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962605 5115 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962608 5115 flags.go:64] FLAG: --cgroup-root=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962613 5115 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962617 5115 flags.go:64] FLAG: --client-ca-file=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962621 5115 flags.go:64] FLAG: --cloud-config=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962624 5115 flags.go:64] FLAG: --cloud-provider=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962628 5115 flags.go:64] FLAG: --cluster-dns="[]"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962632 5115 flags.go:64] FLAG: --cluster-domain=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962636 5115 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962640 5115 flags.go:64] FLAG: --config-dir=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962643 5115 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962648 5115 flags.go:64] FLAG: --container-log-max-files="5"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962657 5115 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962661 5115 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962665 5115 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962670 5115 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962674 5115 flags.go:64] FLAG: --contention-profiling="false"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962678 5115 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962682 5115 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962686 5115 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962690 5115 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962695 5115 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962699 5115 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962702 5115 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962706 5115 flags.go:64] FLAG: --enable-load-reader="false"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962712 5115 flags.go:64] FLAG: --enable-server="true"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962716 5115 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962721 5115 flags.go:64] FLAG: --event-burst="100"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962725 5115 flags.go:64] FLAG: --event-qps="50"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962728 5115 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962732 5115 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962736 5115 flags.go:64] FLAG: --eviction-hard=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962741 5115 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962745 5115 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962749 5115 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962753 5115 flags.go:64] FLAG: --eviction-soft=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962757 5115 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962760 5115 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962764 5115 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962768 5115 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962771 5115 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962775 5115 flags.go:64] FLAG: --fail-swap-on="true"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962778 5115 flags.go:64] FLAG: --feature-gates=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962783 5115 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962787 5115 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962791 5115 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962795 5115 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962816 5115 flags.go:64] FLAG: --healthz-port="10248"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962821 5115 flags.go:64] FLAG: --help="false"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962825 5115 flags.go:64] FLAG: --hostname-override=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962829 5115 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962833 5115 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962837 5115 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962840 5115 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962844 5115 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962848 5115 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962852 5115 flags.go:64] FLAG: --image-service-endpoint=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962858 5115 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962862 5115 flags.go:64] FLAG: --kube-api-burst="100"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962865 5115 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962869 5115 flags.go:64] FLAG: --kube-api-qps="50"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962873 5115 flags.go:64] FLAG: --kube-reserved=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962877 5115 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962880 5115 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962884 5115 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962888 5115 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962907 5115 flags.go:64] FLAG: --lock-file=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962911 5115 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962915 5115 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962918 5115 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962925 5115 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962929 5115 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962933 5115 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962953 5115 flags.go:64] FLAG: --logging-format="text"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962957 5115 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962961 5115 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962965 5115 flags.go:64] FLAG: --manifest-url=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962969 5115 flags.go:64] FLAG: --manifest-url-header=""
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962975 5115 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962991 5115 flags.go:64] FLAG: --max-open-files="1000000"
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.962996 5115 flags.go:64] FLAG: --max-pods="110"
Jan 20 09:08:09 crc
kubenswrapper[5115]: I0120 09:08:09.963000 5115 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963004 5115 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963008 5115 flags.go:64] FLAG: --memory-manager-policy="None" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963012 5115 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963016 5115 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963019 5115 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963023 5115 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963034 5115 flags.go:64] FLAG: --node-status-max-images="50" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963040 5115 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963044 5115 flags.go:64] FLAG: --oom-score-adj="-999" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963048 5115 flags.go:64] FLAG: --pod-cidr="" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963051 5115 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963061 5115 flags.go:64] FLAG: --pod-manifest-path="" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963065 5115 flags.go:64] FLAG: --pod-max-pids="-1" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963070 5115 flags.go:64] FLAG: --pods-per-core="0" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963074 5115 
flags.go:64] FLAG: --port="10250" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963079 5115 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963084 5115 flags.go:64] FLAG: --provider-id="" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963090 5115 flags.go:64] FLAG: --qos-reserved="" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963095 5115 flags.go:64] FLAG: --read-only-port="10255" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963099 5115 flags.go:64] FLAG: --register-node="true" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963104 5115 flags.go:64] FLAG: --register-schedulable="true" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963108 5115 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963120 5115 flags.go:64] FLAG: --registry-burst="10" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963124 5115 flags.go:64] FLAG: --registry-qps="5" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963128 5115 flags.go:64] FLAG: --reserved-cpus="" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963133 5115 flags.go:64] FLAG: --reserved-memory="" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963137 5115 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963141 5115 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963146 5115 flags.go:64] FLAG: --rotate-certificates="false" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963151 5115 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963155 5115 flags.go:64] FLAG: --runonce="false" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963159 5115 flags.go:64] FLAG: 
--runtime-cgroups="/system.slice/crio.service" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963162 5115 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963166 5115 flags.go:64] FLAG: --seccomp-default="false" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963170 5115 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963174 5115 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963179 5115 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963182 5115 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963186 5115 flags.go:64] FLAG: --storage-driver-password="root" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963190 5115 flags.go:64] FLAG: --storage-driver-secure="false" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963199 5115 flags.go:64] FLAG: --storage-driver-table="stats" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963203 5115 flags.go:64] FLAG: --storage-driver-user="root" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963206 5115 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963210 5115 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963214 5115 flags.go:64] FLAG: --system-cgroups="" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963217 5115 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963223 5115 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963226 5115 flags.go:64] FLAG: --tls-cert-file="" Jan 20 09:08:09 crc 
kubenswrapper[5115]: I0120 09:08:09.963230 5115 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963234 5115 flags.go:64] FLAG: --tls-min-version="" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963238 5115 flags.go:64] FLAG: --tls-private-key-file="" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963242 5115 flags.go:64] FLAG: --topology-manager-policy="none" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963246 5115 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963249 5115 flags.go:64] FLAG: --topology-manager-scope="container" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963253 5115 flags.go:64] FLAG: --v="2" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963258 5115 flags.go:64] FLAG: --version="false" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963263 5115 flags.go:64] FLAG: --vmodule="" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963269 5115 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963273 5115 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963357 5115 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963361 5115 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963367 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963371 5115 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963375 5115 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963379 5115 
feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963382 5115 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963386 5115 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963390 5115 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963394 5115 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963397 5115 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963400 5115 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963403 5115 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963428 5115 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963432 5115 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963436 5115 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963439 5115 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963442 5115 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963445 5115 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963449 5115 feature_gate.go:328] unrecognized 
feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963452 5115 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963456 5115 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963460 5115 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963464 5115 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963469 5115 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963473 5115 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963477 5115 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963481 5115 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963485 5115 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963489 5115 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963495 5115 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963500 5115 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963505 5115 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963509 5115 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963517 5115 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963521 5115 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963525 5115 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963529 5115 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963532 5115 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963536 5115 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963540 5115 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963543 5115 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963546 5115 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963549 5115 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963553 5115 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 
09:08:09.963576 5115 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963580 5115 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963583 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963587 5115 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963590 5115 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963595 5115 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963599 5115 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963603 5115 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963607 5115 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963611 5115 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963616 5115 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963620 5115 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963626 5115 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963630 5115 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963634 5115 feature_gate.go:328] 
unrecognized feature gate: ManagedBootImagesAzure Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963638 5115 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963643 5115 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963647 5115 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963651 5115 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963655 5115 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963659 5115 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963666 5115 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963670 5115 feature_gate.go:328] unrecognized feature gate: Example2 Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963674 5115 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963678 5115 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963682 5115 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963686 5115 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963690 5115 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963693 5115 feature_gate.go:328] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963697 5115 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963700 5115 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963704 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963709 5115 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963713 5115 feature_gate.go:328] unrecognized feature gate: Example Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963716 5115 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963719 5115 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963722 5115 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963726 5115 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963729 5115 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963733 5115 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.963737 5115 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.963927 5115 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.980773 5115 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.980834 5115 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.980997 5115 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981019 5115 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981025 5115 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981029 5115 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981038 5115 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981044 5115 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981049 5115 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981053 5115 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 20 09:08:09 crc kubenswrapper[5115]: 
W0120 09:08:09.981057 5115 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981062 5115 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981067 5115 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981073 5115 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981081 5115 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981085 5115 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981091 5115 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981096 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981100 5115 feature_gate.go:328] unrecognized feature gate: Example2 Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981104 5115 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981108 5115 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981112 5115 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981116 5115 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981120 5115 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981124 5115 feature_gate.go:328] 
unrecognized feature gate: NutanixMultiSubnets Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981128 5115 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981131 5115 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981135 5115 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981139 5115 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981144 5115 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981148 5115 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981153 5115 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981157 5115 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981162 5115 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981167 5115 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981171 5115 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981175 5115 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981181 5115 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981185 5115 feature_gate.go:328] unrecognized feature gate: 
NewOLMPreflightPermissionChecks Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981190 5115 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981199 5115 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981208 5115 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981213 5115 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981218 5115 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981224 5115 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981229 5115 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981234 5115 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981239 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981244 5115 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981249 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981254 5115 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981260 5115 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981264 5115 feature_gate.go:328] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981269 5115 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981273 5115 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981278 5115 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981282 5115 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981286 5115 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981291 5115 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981295 5115 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981299 5115 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981303 5115 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981309 5115 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981313 5115 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981318 5115 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981325 5115 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981330 5115 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981334 5115 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981338 5115 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981343 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981347 5115 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981351 5115 feature_gate.go:328] unrecognized feature gate: Example Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981357 5115 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981362 5115 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981367 5115 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981374 5115 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981379 5115 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981384 5115 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981404 5115 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981409 5115 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 20 09:08:09 crc 
kubenswrapper[5115]: W0120 09:08:09.981413 5115 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981417 5115 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981421 5115 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981425 5115 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981430 5115 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981434 5115 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981438 5115 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981442 5115 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.981450 5115 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981673 5115 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981684 5115 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981689 5115 feature_gate.go:328] 
unrecognized feature gate: UpgradeStatus Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981693 5115 feature_gate.go:328] unrecognized feature gate: Example2 Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981698 5115 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981702 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981706 5115 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981711 5115 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981715 5115 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981719 5115 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981724 5115 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981729 5115 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981733 5115 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981737 5115 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981741 5115 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981745 5115 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981750 5115 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 20 
09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981754 5115 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981759 5115 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981764 5115 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981768 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981772 5115 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981787 5115 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981794 5115 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981798 5115 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981804 5115 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981809 5115 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981814 5115 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981819 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981824 5115 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981829 5115 feature_gate.go:328] unrecognized feature 
gate: GCPClusterHostedDNS Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981833 5115 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981838 5115 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981844 5115 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981849 5115 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981854 5115 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981858 5115 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981863 5115 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981868 5115 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981872 5115 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981876 5115 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981881 5115 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981885 5115 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981907 5115 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981912 5115 feature_gate.go:328] unrecognized feature gate: 
NetworkSegmentation Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981917 5115 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981921 5115 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981925 5115 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981930 5115 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981935 5115 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981939 5115 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981944 5115 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981948 5115 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981952 5115 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981956 5115 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981970 5115 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981974 5115 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981979 5115 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981983 5115 feature_gate.go:328] unrecognized feature gate: Example Jan 20 09:08:09 crc 
kubenswrapper[5115]: W0120 09:08:09.981987 5115 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981991 5115 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981995 5115 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.981999 5115 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982004 5115 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982008 5115 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982013 5115 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982016 5115 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982021 5115 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982025 5115 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982029 5115 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982033 5115 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982037 5115 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982041 5115 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 
09:08:09.982046 5115 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982050 5115 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982054 5115 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982059 5115 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982063 5115 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982068 5115 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982072 5115 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982078 5115 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982083 5115 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982087 5115 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982092 5115 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982097 5115 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 20 09:08:09 crc kubenswrapper[5115]: W0120 09:08:09.982101 5115 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.982107 5115 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false 
MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.982420 5115 server.go:962] "Client rotation is on, will bootstrap in background" Jan 20 09:08:09 crc kubenswrapper[5115]: E0120 09:08:09.985720 5115 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.989246 5115 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.989380 5115 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.990030 5115 server.go:1019] "Starting client certificate rotation" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.990155 5115 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.990205 5115 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.996594 5115 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 09:08:09 crc kubenswrapper[5115]: E0120 09:08:09.998883 5115 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate 
signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 09:08:09 crc kubenswrapper[5115]: I0120 09:08:09.998908 5115 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.008872 5115 log.go:25] "Validated CRI v1 runtime API" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.028445 5115 log.go:25] "Validated CRI v1 image API" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.030084 5115 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.033587 5115 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-20-09-02-02-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.033644 5115 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.058875 5115 manager.go:217] Machine: {Timestamp:2026-01-20 09:08:10.056803576 +0000 UTC m=+0.225582146 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 
MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:4e7ead0d-ccd6-45dd-b671-f46e59803438 BootID:f3c68733-f696-46f4-a86e-b22c133b82e3 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:19:c2:37 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:19:c2:37 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b8:f3:30 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:fb:e4:69 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:88:82:74 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:21:67:c9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ae:b4:d2:1f:55:09 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a6:cd:d3:f2:d7:b6 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] 
Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 
Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.059353 5115 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.059619 5115 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.061024 5115 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.061086 5115 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None
","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.061399 5115 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.061420 5115 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.061458 5115 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.061703 5115 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.062349 5115 state_mem.go:36] "Initialized new in-memory state store" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.062619 5115 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.063534 5115 kubelet.go:491] "Attempting to sync node with API server" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.063563 5115 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.063599 5115 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.063622 5115 kubelet.go:397] "Adding apiserver pod source" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.063648 5115 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.066628 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial 
tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.066662 5115 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.066864 5115 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.067088 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.068651 5115 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.068724 5115 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.071324 5115 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.071695 5115 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.072437 5115 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.072966 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.073053 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.073109 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.073155 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.073201 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.073256 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.073304 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.073356 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.073452 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.073520 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.073615 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.073793 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.074269 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.074415 5115 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.075685 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.086793 5115 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.086968 5115 server.go:1295] "Started kubelet"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.087671 5115 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.087780 5115 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 09:08:10 crc systemd[1]: Started Kubernetes Kubelet.
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.087961 5115 server_v1.go:47] "podresources" method="list" useActivePods=true
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.091677 5115 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.091689 5115 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.092992 5115 volume_manager.go:295] "The desired_state_of_world populator starts"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.093009 5115 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.093109 5115 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.093139 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.093172 5115 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.106016 5115 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.106051 5115 factory.go:55] Registering systemd factory
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.106063 5115 factory.go:223] Registration of the systemd container factory successfully
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.106066 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="200ms"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.106290 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.106422 5115 server.go:317] "Adding debug handlers to kubelet server"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.106192 5115 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c654288c9b628 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.08692996 +0000 UTC m=+0.255708500,LastTimestamp:2026-01-20 09:08:10.08692996 +0000 UTC m=+0.255708500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.107434 5115 factory.go:153] Registering CRI-O factory
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.107478 5115 factory.go:223] Registration of the crio container factory successfully
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.107517 5115 factory.go:103] Registering Raw factory
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.107634 5115 manager.go:1196] Started watching for new ooms in manager
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.108416 5115 manager.go:319] Starting recovery of all containers
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.144951 5115 manager.go:324] Recovery completed
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.147016 5115 watcher.go:152] Failed to watch directory "/sys/fs/cgroup/system.slice/crc-pullsecret.service": inotify_add_watch /sys/fs/cgroup/system.slice/crc-pullsecret.service: no such file or directory
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156233 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156342 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156384 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156495 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156513 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156536 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156671 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156824 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156870 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156918 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156936 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156952 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156971 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.156988 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157012 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157027 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157047 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157061 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157081 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157095 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157148 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157167 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157184 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157203 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157215 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157234 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157250 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157268 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157295 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157314 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157330 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157377 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157402 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157424 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157451 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157467 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157487 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157504 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157525 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157541 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157559 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157581 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157599 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157621 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157639 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157660 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157679 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157699 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157721 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157738 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157768 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157787 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157803 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157822 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157841 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157863 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157921 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157967 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.157988 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158021 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158044 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158060 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158078 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158093 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158110 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158132 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158144 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158162 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158177 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158196 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158211 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158227 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158245 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158261 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158280 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158299 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158318 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158337 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158354 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158375 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158391 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158409 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158424 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158441 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158455 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158469 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158488 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158502 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158518 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158534 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158548 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158566 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158582 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.158602 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.163488 5115 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.164844 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.164867 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.164882 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.164909 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.164924 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.164956 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93"
volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.164970 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.164984 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.164997 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.165011 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.165022 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.165036 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" 
seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.165048 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.165062 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.165076 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.165090 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.165102 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.165114 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 
09:08:10.165155 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.165167 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.165179 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.174873 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.174904 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.174917 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.174927 5115 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.174939 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.174949 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.174975 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.174986 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.174999 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175008 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175018 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175028 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175040 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175077 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175087 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175097 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" 
seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175107 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175132 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175142 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175152 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175161 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175172 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: 
I0120 09:08:10.175184 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175193 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175203 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175214 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175225 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175234 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175247 5115 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175260 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175270 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175281 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175291 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175302 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175315 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" 
volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175325 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175335 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175346 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175360 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175371 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175384 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" 
volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175418 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175429 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175440 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175450 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175460 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175471 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 20 09:08:10 
crc kubenswrapper[5115]: I0120 09:08:10.175480 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175490 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175517 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175529 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175540 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175551 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175561 5115 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175571 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175582 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175592 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175602 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175611 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175622 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175648 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175659 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175668 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175677 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175687 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175696 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" 
volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175704 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175714 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175725 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175735 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175746 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175757 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175767 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175779 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175790 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175798 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175810 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175809 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.175822 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177083 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177130 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177150 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177168 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177183 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177199 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177215 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177229 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177245 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177262 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177276 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177290 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177305 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177319 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177334 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177349 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177365 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177382 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177396 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177412 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177429 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177445 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177461 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177477 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177491 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177505 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177518 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177533 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177547 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177562 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177576 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177688 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177705 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177718 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177733 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177747 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177763 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177779 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177794 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177806 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177820 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177834 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177846 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177860 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177873 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177887 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177920 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177934 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177951 5115 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177965 5115 reconstruct.go:97] "Volume reconstruction finished"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.177974 5115 reconciler.go:26] "Reconciler: start to sync state"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.179777 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.179813 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.179836 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.192143 5115 cpu_manager.go:222] "Starting CPU manager" policy="none"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.192164 5115 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.192192 5115 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.193779 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.198075 5115 policy_none.go:49] "None policy: Start"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.198206 5115 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.199567 5115 state_mem.go:35] "Initializing new in-memory state store"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.213085 5115 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.215510 5115 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.215585 5115 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.215642 5115 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.215659 5115 kubelet.go:2451] "Starting kubelet main sync loop"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.215803 5115 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.217322 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.258015 5115 manager.go:341] "Starting Device Plugin manager"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.258498 5115 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.258522 5115 server.go:85] "Starting device plugin registration server"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.259332 5115 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.259358 5115 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.259636 5115 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.259786 5115 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.259800 5115 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.264553 5115 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.264625 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.307731 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="400ms"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.316726 5115 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.316978 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.317991 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.318040 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.318055 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.318810 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.319090 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.319222 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.319712 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.319745 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.319759 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.320053 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.320101 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.320120 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.320431 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.320763 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.320847 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.320992 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.321045 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.321062 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.321558 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.321589 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.321603 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.322156 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.322254 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.322293 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.322638 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.322666 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.322676 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.323025 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.323100 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.323156 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.323405 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.323468 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.323484 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.324247 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.324269 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.324285 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.324594 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.324616 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.324628 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.325432 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.325470 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.326153 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.326173 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.326183 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.360345 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.361566 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.361636 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.361655 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.361699 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.362595 5115 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.375269 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.385670 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.412560 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.433961 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.442815 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.486209 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.486319 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.486548 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.486619 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.486646 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.486692 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.486717 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.486756 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.486823 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.486927 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.486990 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.487027 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.487089 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:08:10 crc 
kubenswrapper[5115]: I0120 09:08:10.487123 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.487193 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.487239 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.487270 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.487315 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.487426 5115 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.487516 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.487612 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.487836 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.487866 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.487953 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.487969 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.488038 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.488086 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.488211 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.488335 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" 
Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.488650 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.563350 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.564620 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.564673 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.564686 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.564720 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.571543 5115 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.588828 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.588917 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.588948 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.588970 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.588992 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589006 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589087 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589132 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589164 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589183 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589217 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589013 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589271 5115 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589293 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589304 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589312 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589342 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589355 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589372 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589386 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589402 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589410 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589432 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589456 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589482 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589481 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589523 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589544 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589433 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: 
\"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589458 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589594 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.589733 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.676527 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.689406 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: W0120 09:08:10.702480 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-6ac953a84cbedcf8336d41b880fa36478c96e441654e32a0826ebaae73f773a0 WatchSource:0}: Error finding container 6ac953a84cbedcf8336d41b880fa36478c96e441654e32a0826ebaae73f773a0: Status 404 returned error can't find the container with id 6ac953a84cbedcf8336d41b880fa36478c96e441654e32a0826ebaae73f773a0 Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.708959 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="800ms" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.711570 5115 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.712807 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: W0120 09:08:10.713882 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-599649bbce797a2940f6ef2b48b21541c4f29a1451aa128fa0916e3ca3d23f80 WatchSource:0}: Error finding container 599649bbce797a2940f6ef2b48b21541c4f29a1451aa128fa0916e3ca3d23f80: Status 404 returned error can't find the container with id 599649bbce797a2940f6ef2b48b21541c4f29a1451aa128fa0916e3ca3d23f80 Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.735611 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.744118 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:08:10 crc kubenswrapper[5115]: W0120 09:08:10.744491 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-0c434758e6e9146827245a5ae9ad4f26779e19f2474d8e2ec2f6da8ef3ada11b WatchSource:0}: Error finding container 0c434758e6e9146827245a5ae9ad4f26779e19f2474d8e2ec2f6da8ef3ada11b: Status 404 returned error can't find the container with id 0c434758e6e9146827245a5ae9ad4f26779e19f2474d8e2ec2f6da8ef3ada11b Jan 20 09:08:10 crc kubenswrapper[5115]: W0120 09:08:10.765477 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-eeb498b8aae79106f276c60c46b6794f42e474f505e6bf43269cb2af478ea690 WatchSource:0}: Error finding container eeb498b8aae79106f276c60c46b6794f42e474f505e6bf43269cb2af478ea690: Status 404 returned error can't find the container with id eeb498b8aae79106f276c60c46b6794f42e474f505e6bf43269cb2af478ea690 Jan 20 09:08:10 crc kubenswrapper[5115]: W0120 09:08:10.773051 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-37f6a235e1f041c61105b992ed669a24f8b79ec2fc081b5edabfe0da53b8e0b4 WatchSource:0}: Error finding container 37f6a235e1f041c61105b992ed669a24f8b79ec2fc081b5edabfe0da53b8e0b4: Status 404 returned error can't find the container with id 37f6a235e1f041c61105b992ed669a24f8b79ec2fc081b5edabfe0da53b8e0b4 Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.971985 5115 kubelet_node_status.go:413] 
"Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.973581 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.973661 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.973680 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:10 crc kubenswrapper[5115]: I0120 09:08:10.973727 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.974634 5115 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Jan 20 09:08:10 crc kubenswrapper[5115]: E0120 09:08:10.992614 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 09:08:11 crc kubenswrapper[5115]: I0120 09:08:11.077490 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 20 09:08:11 crc kubenswrapper[5115]: I0120 09:08:11.221584 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"6ac953a84cbedcf8336d41b880fa36478c96e441654e32a0826ebaae73f773a0"}
Jan 20 09:08:11 crc kubenswrapper[5115]: I0120 09:08:11.223001 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"37f6a235e1f041c61105b992ed669a24f8b79ec2fc081b5edabfe0da53b8e0b4"}
Jan 20 09:08:11 crc kubenswrapper[5115]: I0120 09:08:11.224447 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"eeb498b8aae79106f276c60c46b6794f42e474f505e6bf43269cb2af478ea690"}
Jan 20 09:08:11 crc kubenswrapper[5115]: I0120 09:08:11.225944 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0c434758e6e9146827245a5ae9ad4f26779e19f2474d8e2ec2f6da8ef3ada11b"}
Jan 20 09:08:11 crc kubenswrapper[5115]: I0120 09:08:11.227273 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"599649bbce797a2940f6ef2b48b21541c4f29a1451aa128fa0916e3ca3d23f80"}
Jan 20 09:08:11 crc kubenswrapper[5115]: E0120 09:08:11.333722 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 20 09:08:11 crc kubenswrapper[5115]: E0120 09:08:11.510406 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="1.6s"
Jan 20 09:08:11 crc kubenswrapper[5115]: E0120 09:08:11.559851 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 09:08:11 crc kubenswrapper[5115]: E0120 09:08:11.652971 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 20 09:08:11 crc kubenswrapper[5115]: I0120 09:08:11.775797 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:11 crc kubenswrapper[5115]: I0120 09:08:11.776872 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:11 crc kubenswrapper[5115]: I0120 09:08:11.776932 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:11 crc kubenswrapper[5115]: I0120 09:08:11.776948 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:11 crc kubenswrapper[5115]: I0120 09:08:11.776980 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 20 09:08:11 crc kubenswrapper[5115]: E0120 09:08:11.777467 5115 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.077252 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.128530 5115 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 20 09:08:12 crc kubenswrapper[5115]: E0120 09:08:12.130110 5115 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.232001 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d"}
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.232062 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32"}
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.232072 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447"}
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.233617 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006" exitCode=0
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.233886 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.234164 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006"}
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.234700 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.234733 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.234744 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:12 crc kubenswrapper[5115]: E0120 09:08:12.235002 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.236906 5115 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35" exitCode=0
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.236976 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35"}
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.237034 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.237303 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.237831 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.237884 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.237917 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.238248 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.238271 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6"}
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.238304 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.238377 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.238256 5115 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6" exitCode=0
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.238379 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:12 crc kubenswrapper[5115]: E0120 09:08:12.238881 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.240112 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.240137 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.240147 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:12 crc kubenswrapper[5115]: E0120 09:08:12.240323 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.240750 5115 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4" exitCode=0
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.240781 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4"}
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.241757 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.242320 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.242350 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:12 crc kubenswrapper[5115]: I0120 09:08:12.242359 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:12 crc kubenswrapper[5115]: E0120 09:08:12.242511 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.076650 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Jan 20 09:08:13 crc kubenswrapper[5115]: E0120 09:08:13.111194 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="3.2s"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.245485 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652"}
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.245555 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5"}
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.245571 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c"}
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.247111 5115 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4" exitCode=0
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.247173 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4"}
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.247405 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.248413 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.248446 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.248468 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:13 crc kubenswrapper[5115]: E0120 09:08:13.248703 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.253508 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.253989 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c"}
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.254823 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.254868 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.254882 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:13 crc kubenswrapper[5115]: E0120 09:08:13.255106 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.259341 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1"}
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.259403 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56"}
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.259411 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.259417 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767"}
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.259798 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.259828 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.259841 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:13 crc kubenswrapper[5115]: E0120 09:08:13.260033 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.270712 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9"}
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.270883 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.274353 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.274413 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.274430 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:13 crc kubenswrapper[5115]: E0120 09:08:13.274724 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.377526 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.378546 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.378592 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.378605 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:13 crc kubenswrapper[5115]: I0120 09:08:13.378632 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 20 09:08:13 crc kubenswrapper[5115]: E0120 09:08:13.379158 5115 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc"
Jan 20 09:08:13 crc kubenswrapper[5115]: E0120 09:08:13.403076 5115 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c654288c9b628 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.08692996 +0000 UTC m=+0.255708500,LastTimestamp:2026-01-20 09:08:10.08692996 +0000 UTC m=+0.255708500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:13 crc kubenswrapper[5115]: E0120 09:08:13.448507 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 09:08:13 crc kubenswrapper[5115]: E0120 09:08:13.513542 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.278166 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"8f5d392c8c16bc8dca522160d2028e27d588d5ba566d833fde1e5414c1a50de2"}
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.278263 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df"}
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.278384 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.279661 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.279735 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.279751 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:14 crc kubenswrapper[5115]: E0120 09:08:14.280138 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.281379 5115 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151" exitCode=0
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.281544 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151"}
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.281589 5115 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.281728 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.281738 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.281877 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.282648 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.282675 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.282704 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.282727 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.282657 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.282752 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.282760 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.282731 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.282774 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:14 crc kubenswrapper[5115]: E0120 09:08:14.283500 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:14 crc kubenswrapper[5115]: E0120 09:08:14.283648 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:14 crc kubenswrapper[5115]: E0120 09:08:14.283765 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.284599 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.285285 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.285325 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.285339 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:14 crc kubenswrapper[5115]: E0120 09:08:14.285613 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:14 crc kubenswrapper[5115]: I0120 09:08:14.863351 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:08:15 crc kubenswrapper[5115]: I0120 09:08:15.288519 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab"}
Jan 20 09:08:15 crc kubenswrapper[5115]: I0120 09:08:15.288593 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a"}
Jan 20 09:08:15 crc kubenswrapper[5115]: I0120 09:08:15.288609 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025"}
Jan 20 09:08:15 crc kubenswrapper[5115]: I0120 09:08:15.288727 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:15 crc kubenswrapper[5115]: I0120 09:08:15.288725 5115 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 20 09:08:15 crc kubenswrapper[5115]: I0120 09:08:15.288846 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:15 crc kubenswrapper[5115]: I0120 09:08:15.289253 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:15 crc kubenswrapper[5115]: I0120 09:08:15.289279 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:15 crc kubenswrapper[5115]: I0120 09:08:15.289287 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:15 crc kubenswrapper[5115]: E0120 09:08:15.289623 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:15 crc kubenswrapper[5115]: I0120 09:08:15.289730 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:15 crc kubenswrapper[5115]: I0120 09:08:15.289748 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:15 crc kubenswrapper[5115]: I0120 09:08:15.289757 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:15 crc kubenswrapper[5115]: E0120 09:08:15.290027 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:15 crc kubenswrapper[5115]: I0120 09:08:15.347727 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.301244 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8"}
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.301347 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0"}
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.301512 5115 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.301589 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.301660 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.302790 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.302871 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.302949 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.302984 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.303063 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.303092 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:16 crc kubenswrapper[5115]: E0120 09:08:16.303887 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:16 crc kubenswrapper[5115]: E0120 09:08:16.304275 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.327442 5115 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.580182 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.581855 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.581970 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.581993 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.582046 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.799737 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.800150 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.801490 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.801552 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:16 crc kubenswrapper[5115]: I0120 09:08:16.801563 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:16 crc kubenswrapper[5115]: E0120 09:08:16.801983 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:17 crc kubenswrapper[5115]: I0120 09:08:17.303580 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:17 crc kubenswrapper[5115]: I0120 09:08:17.304472 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:17 crc kubenswrapper[5115]: I0120 09:08:17.304556 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:17 crc kubenswrapper[5115]: I0120 09:08:17.304588 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:17 crc kubenswrapper[5115]: E0120 09:08:17.305241 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:17 crc kubenswrapper[5115]: I0120 09:08:17.534069 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.306289 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.307082 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.307152 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.307168 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:18 crc kubenswrapper[5115]: E0120 09:08:18.307807 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.330565 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.330968 5115 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.331215 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.332223 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.332283 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.332296 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:18 crc kubenswrapper[5115]: E0120 09:08:18.332784 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.388020 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.805027 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.805399 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.806579 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.806624 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:18 crc kubenswrapper[5115]: I0120 09:08:18.806636 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:18 crc kubenswrapper[5115]: E0120 09:08:18.806944 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.299472 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.299812 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.301099 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.301149 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.301163 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:08:19 crc kubenswrapper[5115]: E0120 09:08:19.301473 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the
cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.307552 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.308428 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.308757 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.309077 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.309107 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.309118 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:19 crc kubenswrapper[5115]: E0120 09:08:19.309379 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.310031 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.310067 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.310111 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:19 crc kubenswrapper[5115]: E0120 09:08:19.310514 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"crc\" not found" node="crc" Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.800549 5115 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.800652 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Jan 20 09:08:19 crc kubenswrapper[5115]: I0120 09:08:19.846038 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:08:20 crc kubenswrapper[5115]: E0120 09:08:20.265193 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 09:08:20 crc kubenswrapper[5115]: I0120 09:08:20.311588 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:20 crc kubenswrapper[5115]: I0120 09:08:20.312556 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:20 crc kubenswrapper[5115]: I0120 09:08:20.312640 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:20 crc kubenswrapper[5115]: I0120 09:08:20.312668 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:20 crc kubenswrapper[5115]: E0120 09:08:20.313346 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" 
not found" node="crc" Jan 20 09:08:23 crc kubenswrapper[5115]: I0120 09:08:23.216400 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 20 09:08:23 crc kubenswrapper[5115]: I0120 09:08:23.216879 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:23 crc kubenswrapper[5115]: I0120 09:08:23.218856 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:23 crc kubenswrapper[5115]: I0120 09:08:23.218926 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:23 crc kubenswrapper[5115]: I0120 09:08:23.218944 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:23 crc kubenswrapper[5115]: E0120 09:08:23.219589 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:24 crc kubenswrapper[5115]: I0120 09:08:24.077648 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 20 09:08:24 crc kubenswrapper[5115]: I0120 09:08:24.444104 5115 trace.go:236] Trace[688829078]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 09:08:14.442) (total time: 10001ms): Jan 20 09:08:24 crc kubenswrapper[5115]: Trace[688829078]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:08:24.443) Jan 20 09:08:24 crc kubenswrapper[5115]: Trace[688829078]: [10.001517272s] [10.001517272s] END Jan 20 09:08:24 crc kubenswrapper[5115]: E0120 09:08:24.444179 5115 
reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 09:08:24 crc kubenswrapper[5115]: I0120 09:08:24.760609 5115 trace.go:236] Trace[997671750]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 09:08:14.758) (total time: 10001ms): Jan 20 09:08:24 crc kubenswrapper[5115]: Trace[997671750]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:08:24.760) Jan 20 09:08:24 crc kubenswrapper[5115]: Trace[997671750]: [10.001670776s] [10.001670776s] END Jan 20 09:08:24 crc kubenswrapper[5115]: E0120 09:08:24.760689 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 09:08:25 crc kubenswrapper[5115]: I0120 09:08:25.992008 5115 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 09:08:25 crc kubenswrapper[5115]: I0120 09:08:25.992119 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe 
failed with statuscode: 403" Jan 20 09:08:25 crc kubenswrapper[5115]: I0120 09:08:25.999445 5115 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 09:08:25 crc kubenswrapper[5115]: I0120 09:08:25.999530 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 20 09:08:26 crc kubenswrapper[5115]: E0120 09:08:26.311862 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 20 09:08:27 crc kubenswrapper[5115]: E0120 09:08:27.794416 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 09:08:28 crc kubenswrapper[5115]: I0120 09:08:28.341263 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:28 crc kubenswrapper[5115]: I0120 09:08:28.342136 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:28 crc kubenswrapper[5115]: I0120 09:08:28.343488 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 09:08:28 crc kubenswrapper[5115]: I0120 09:08:28.343704 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:28 crc kubenswrapper[5115]: I0120 09:08:28.343867 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:28 crc kubenswrapper[5115]: E0120 09:08:28.344745 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:28 crc kubenswrapper[5115]: I0120 09:08:28.351330 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:29 crc kubenswrapper[5115]: I0120 09:08:29.344944 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:29 crc kubenswrapper[5115]: I0120 09:08:29.346529 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:29 crc kubenswrapper[5115]: I0120 09:08:29.346656 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:29 crc kubenswrapper[5115]: I0120 09:08:29.346752 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:29 crc kubenswrapper[5115]: E0120 09:08:29.347296 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:29 crc kubenswrapper[5115]: I0120 09:08:29.800799 5115 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 09:08:29 crc kubenswrapper[5115]: I0120 09:08:29.800950 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 09:08:29 crc kubenswrapper[5115]: E0120 09:08:29.984401 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 09:08:30 crc kubenswrapper[5115]: E0120 09:08:30.265546 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 09:08:30 crc kubenswrapper[5115]: I0120 09:08:30.317798 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:08:30 crc kubenswrapper[5115]: I0120 09:08:30.318046 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:30 crc kubenswrapper[5115]: I0120 09:08:30.319353 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:30 crc kubenswrapper[5115]: I0120 09:08:30.319481 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:30 crc kubenswrapper[5115]: I0120 09:08:30.319564 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 20 09:08:30 crc kubenswrapper[5115]: E0120 09:08:30.320058 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:30 crc kubenswrapper[5115]: I0120 09:08:30.979061 5115 trace.go:236] Trace[762597897]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 09:08:18.150) (total time: 12828ms): Jan 20 09:08:30 crc kubenswrapper[5115]: Trace[762597897]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 12828ms (09:08:30.978) Jan 20 09:08:30 crc kubenswrapper[5115]: Trace[762597897]: [12.828898255s] [12.828898255s] END Jan 20 09:08:30 crc kubenswrapper[5115]: E0120 09:08:30.979779 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 09:08:30 crc kubenswrapper[5115]: I0120 09:08:30.979597 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:30 crc kubenswrapper[5115]: I0120 09:08:30.979646 5115 trace.go:236] Trace[1180439721]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 09:08:18.138) (total time: 12841ms): Jan 20 09:08:30 crc kubenswrapper[5115]: Trace[1180439721]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 12841ms (09:08:30.979) Jan 20 09:08:30 crc kubenswrapper[5115]: Trace[1180439721]: [12.841350639s] [12.841350639s] END Jan 20 09:08:30 crc 
kubenswrapper[5115]: E0120 09:08:30.980122 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 09:08:30 crc kubenswrapper[5115]: E0120 09:08:30.979040 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c654288c9b628 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.08692996 +0000 UTC m=+0.255708500,LastTimestamp:2026-01-20 09:08:10.08692996 +0000 UTC m=+0.255708500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:30 crc kubenswrapper[5115]: E0120 09:08:30.980736 5115 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 20 09:08:30 crc kubenswrapper[5115]: E0120 09:08:30.982088 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:30 crc kubenswrapper[5115]: I0120 09:08:30.982231 5115 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 20 09:08:30 crc kubenswrapper[5115]: E0120 09:08:30.987049 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:30 crc kubenswrapper[5115]: E0120 09:08:30.994099 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e5373a1 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC m=+0.348621507,LastTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC m=+0.348621507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.000803 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c654293293622 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.260960802 +0000 UTC m=+0.429739332,LastTimestamp:2026-01-20 09:08:10.260960802 +0000 UTC m=+0.429739332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.011071 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e52c433\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.31801407 +0000 UTC m=+0.486792600,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.018575 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e532987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.31804838 +0000 UTC m=+0.486826910,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.022494 5115 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44822->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.022548 5115 prober.go:120] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44822->192.168.126.11:17697: read: connection reset by peer" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.022763 5115 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.022787 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.022501 5115 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44806->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.023108 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44806->192.168.126.11:17697: read: connection reset by peer" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.025524 5115 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.188c65428e5373a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e5373a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC m=+0.348621507,LastTimestamp:2026-01-20 09:08:10.31806176 +0000 UTC m=+0.486840290,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.030749 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e52c433\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.319732275 +0000 UTC m=+0.488510805,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.037888 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e532987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.319751605 +0000 UTC m=+0.488530135,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.043647 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e5373a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e5373a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC m=+0.348621507,LastTimestamp:2026-01-20 09:08:10.319764465 +0000 UTC m=+0.488542995,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.053617 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e52c433\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.320076586 +0000 UTC m=+0.488855116,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.065349 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e532987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.320113696 +0000 UTC m=+0.488892226,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.073553 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e5373a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e5373a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC m=+0.348621507,LastTimestamp:2026-01-20 09:08:10.320126766 +0000 UTC m=+0.488905296,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.078814 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e52c433\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.321020199 +0000 UTC m=+0.489798729,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.079789 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.080318 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e532987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.321055739 +0000 UTC m=+0.489834269,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.084678 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e5373a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e5373a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC m=+0.348621507,LastTimestamp:2026-01-20 09:08:10.321068959 +0000 UTC m=+0.489847489,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.088518 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e52c433\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.32157449 +0000 UTC m=+0.490353020,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.090981 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e532987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.32159447 +0000 UTC m=+0.490373000,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.093266 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e5373a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e5373a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC 
m=+0.348621507,LastTimestamp:2026-01-20 09:08:10.32160933 +0000 UTC m=+0.490387860,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.099501 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e52c433\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.322657463 +0000 UTC m=+0.491435993,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.101467 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e532987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.322671653 +0000 UTC m=+0.491450183,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.107649 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e5373a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e5373a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179842977 +0000 UTC m=+0.348621507,LastTimestamp:2026-01-20 09:08:10.322681913 +0000 UTC m=+0.491460443,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.112062 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e52c433\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e52c433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179798067 +0000 UTC m=+0.348576587,LastTimestamp:2026-01-20 09:08:10.323060844 +0000 UTC m=+0.491839404,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.118062 5115 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188c65428e532987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188c65428e532987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.179824007 +0000 UTC m=+0.348602557,LastTimestamp:2026-01-20 09:08:10.323114794 +0000 UTC m=+0.491893364,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.133375 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c6542ae0c0834 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.712033332 +0000 UTC m=+0.880811862,LastTimestamp:2026-01-20 09:08:10.712033332 +0000 UTC m=+0.880811862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.139998 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6542aed618ab openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.725275819 +0000 UTC m=+0.894054399,LastTimestamp:2026-01-20 09:08:10.725275819 +0000 UTC m=+0.894054399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.144393 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6542b0297a83 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.747517571 +0000 UTC m=+0.916296131,LastTimestamp:2026-01-20 09:08:10.747517571 +0000 UTC m=+0.916296131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.148997 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542b1a133d1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.772141009 +0000 UTC m=+0.940919549,LastTimestamp:2026-01-20 09:08:10.772141009 +0000 UTC m=+0.940919549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.153855 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c6542b1f6b4d2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:10.777744594 +0000 UTC m=+0.946523134,LastTimestamp:2026-01-20 09:08:10.777744594 +0000 UTC m=+0.946523134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.158548 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c6542d0bebc22 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.294170146 +0000 UTC m=+1.462948676,LastTimestamp:2026-01-20 09:08:11.294170146 +0000 UTC m=+1.462948676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.163957 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542d0ca95d0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.294946768 +0000 UTC m=+1.463725298,LastTimestamp:2026-01-20 09:08:11.294946768 +0000 UTC m=+1.463725298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.169101 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c6542d0ca827a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.294941818 +0000 UTC m=+1.463720348,LastTimestamp:2026-01-20 09:08:11.294941818 +0000 UTC m=+1.463720348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.173943 5115 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6542d0ef81e8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.297366504 +0000 UTC m=+1.466145034,LastTimestamp:2026-01-20 09:08:11.297366504 +0000 UTC m=+1.466145034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.189222 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6542d158d5cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.304269263 +0000 UTC m=+1.473047793,LastTimestamp:2026-01-20 09:08:11.304269263 +0000 UTC m=+1.473047793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.193715 5115 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542d187e76a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.307353962 +0000 UTC m=+1.476132492,LastTimestamp:2026-01-20 09:08:11.307353962 +0000 UTC m=+1.476132492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.198544 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542d1a2cfd5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.309117397 +0000 UTC m=+1.477895937,LastTimestamp:2026-01-20 09:08:11.309117397 
+0000 UTC m=+1.477895937,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.204097 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c6542d1eb98c6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.31388743 +0000 UTC m=+1.482665960,LastTimestamp:2026-01-20 09:08:11.31388743 +0000 UTC m=+1.482665960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.211997 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6542d21751b2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.316752818 +0000 UTC 
m=+1.485531348,LastTimestamp:2026-01-20 09:08:11.316752818 +0000 UTC m=+1.485531348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.220473 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6542d2225d17 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.317476631 +0000 UTC m=+1.486255161,LastTimestamp:2026-01-20 09:08:11.317476631 +0000 UTC m=+1.486255161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.229727 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c6542d2255413 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.317670931 +0000 UTC 
m=+1.486449471,LastTimestamp:2026-01-20 09:08:11.317670931 +0000 UTC m=+1.486449471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.234880 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542e3068e78 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.600866936 +0000 UTC m=+1.769645496,LastTimestamp:2026-01-20 09:08:11.600866936 +0000 UTC m=+1.769645496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.247635 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542e3923b21 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.610020641 +0000 UTC m=+1.778799171,LastTimestamp:2026-01-20 09:08:11.610020641 +0000 UTC m=+1.778799171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.258614 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542e3a40098 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:11.611185304 +0000 UTC m=+1.779963874,LastTimestamp:2026-01-20 09:08:11.611185304 +0000 UTC m=+1.779963874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.263748 5115 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542fc0a6749 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.020549449 +0000 UTC m=+2.189327979,LastTimestamp:2026-01-20 09:08:12.020549449 +0000 UTC m=+2.189327979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.268303 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542fc9d248e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.030166158 +0000 UTC m=+2.198944698,LastTimestamp:2026-01-20 09:08:12.030166158 +0000 UTC m=+2.198944698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.276949 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6542fcb2a8b3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.031576243 +0000 UTC m=+2.200354773,LastTimestamp:2026-01-20 09:08:12.031576243 +0000 UTC m=+2.200354773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.282859 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c654308e926d1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.236474065 +0000 UTC m=+2.405252595,LastTimestamp:2026-01-20 09:08:12.236474065 +0000 UTC m=+2.405252595,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.291378 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543091c2e93 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.239818387 +0000 UTC m=+2.408596917,LastTimestamp:2026-01-20 09:08:12.239818387 +0000 UTC m=+2.408596917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.298346 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c6543093a1683 openshift-machine-config-operator 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.241778307 +0000 UTC m=+2.410556837,LastTimestamp:2026-01-20 09:08:12.241778307 +0000 UTC m=+2.410556837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.308991 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c654309a47151 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.248748369 +0000 UTC m=+2.417526899,LastTimestamp:2026-01-20 09:08:12.248748369 +0000 UTC m=+2.417526899,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 
09:08:31.327304 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6543118ea312 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.381537042 +0000 UTC m=+2.550315572,LastTimestamp:2026-01-20 09:08:12.381537042 +0000 UTC m=+2.550315572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.345330 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6543140ea7cc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.423481292 +0000 UTC m=+2.592259822,LastTimestamp:2026-01-20 
09:08:12.423481292 +0000 UTC m=+2.592259822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.355607 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65431c33caac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.56013278 +0000 UTC m=+2.728911320,LastTimestamp:2026-01-20 09:08:12.56013278 +0000 UTC m=+2.728911320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.356671 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.358453 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8f5d392c8c16bc8dca522160d2028e27d588d5ba566d833fde1e5414c1a50de2" exitCode=255 Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.358525 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"8f5d392c8c16bc8dca522160d2028e27d588d5ba566d833fde1e5414c1a50de2"} Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.358744 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.359461 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.359495 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.359505 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.359779 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:31 crc kubenswrapper[5115]: I0120 09:08:31.360027 5115 scope.go:117] "RemoveContainer" containerID="8f5d392c8c16bc8dca522160d2028e27d588d5ba566d833fde1e5414c1a50de2" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.361450 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c65431c8a9c11 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.565822481 +0000 UTC m=+2.734601011,LastTimestamp:2026-01-20 09:08:12.565822481 +0000 UTC m=+2.734601011,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.366841 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c65431caef60b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.568204811 +0000 UTC m=+2.736983341,LastTimestamp:2026-01-20 09:08:12.568204811 +0000 UTC m=+2.736983341,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.384954 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65431cc7574a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.56980257 +0000 UTC m=+2.738581100,LastTimestamp:2026-01-20 09:08:12.56980257 +0000 UTC m=+2.738581100,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.398385 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65431cf31efd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.572671741 +0000 UTC m=+2.741450271,LastTimestamp:2026-01-20 09:08:12.572671741 +0000 UTC m=+2.741450271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.404940 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65431d01f48c openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.573643916 +0000 UTC m=+2.742422446,LastTimestamp:2026-01-20 09:08:12.573643916 +0000 UTC m=+2.742422446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.411422 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c65431d9b950f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.583712015 +0000 UTC m=+2.752490545,LastTimestamp:2026-01-20 09:08:12.583712015 +0000 UTC m=+2.752490545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.438125 5115 event.go:359] "Server rejected event (will not retry!)" err="events 
is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65431e024b8a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.590443402 +0000 UTC m=+2.759221922,LastTimestamp:2026-01-20 09:08:12.590443402 +0000 UTC m=+2.759221922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.463328 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65431e386dc6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.59399111 +0000 UTC m=+2.762769640,LastTimestamp:2026-01-20 09:08:12.59399111 +0000 UTC m=+2.762769640,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.472608 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c65431e47c9f1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.594997745 +0000 UTC m=+2.763776275,LastTimestamp:2026-01-20 09:08:12.594997745 +0000 UTC m=+2.763776275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.507877 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65432977c57e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.78269171 +0000 UTC m=+2.951470240,LastTimestamp:2026-01-20 09:08:12.78269171 +0000 UTC 
m=+2.951470240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.514192 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65432a474ea9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.796292777 +0000 UTC m=+2.965071307,LastTimestamp:2026-01-20 09:08:12.796292777 +0000 UTC m=+2.965071307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.540237 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65432a67fef4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.79843506 +0000 UTC m=+2.967213590,LastTimestamp:2026-01-20 09:08:12.79843506 +0000 UTC m=+2.967213590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.607603 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65432a9c949e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.801881246 +0000 UTC m=+2.970659766,LastTimestamp:2026-01-20 09:08:12.801881246 +0000 UTC m=+2.970659766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.620694 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65432c86e54f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.834014543 +0000 UTC m=+3.002793073,LastTimestamp:2026-01-20 09:08:12.834014543 +0000 UTC m=+3.002793073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.631680 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65432c9c5420 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:12.835419168 +0000 UTC m=+3.004197698,LastTimestamp:2026-01-20 09:08:12.835419168 +0000 UTC m=+3.004197698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.638769 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65433ae06bc5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.074762693 +0000 UTC m=+3.243541223,LastTimestamp:2026-01-20 09:08:13.074762693 +0000 UTC m=+3.243541223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.649819 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65433aec7c98 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.075553432 +0000 UTC m=+3.244331962,LastTimestamp:2026-01-20 09:08:13.075553432 +0000 UTC m=+3.244331962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc 
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.656728 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65433b7ffc48 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.085219912 +0000 UTC m=+3.253998442,LastTimestamp:2026-01-20 09:08:13.085219912 +0000 UTC m=+3.253998442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.668324 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65433b95c056 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.086646358 +0000 UTC m=+3.255424878,LastTimestamp:2026-01-20 09:08:13.086646358 +0000 UTC m=+3.255424878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.674777 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188c65433b9d375c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.08713558 +0000 UTC m=+3.255914110,LastTimestamp:2026-01-20 09:08:13.08713558 +0000 UTC m=+3.255914110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.679759 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c654345551412 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.250180114 +0000 UTC m=+3.418958644,LastTimestamp:2026-01-20 09:08:13.250180114 +0000 UTC m=+3.418958644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.688960 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65434b189713 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.346879251 +0000 UTC m=+3.515657781,LastTimestamp:2026-01-20 09:08:13.346879251 +0000 UTC m=+3.515657781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.694293 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65434c7d9f5a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.370277722 +0000 UTC m=+3.539056252,LastTimestamp:2026-01-20 09:08:13.370277722 +0000 UTC m=+3.539056252,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.700233 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65434c98db20 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.372062496 +0000 UTC m=+3.540841036,LastTimestamp:2026-01-20 09:08:13.372062496 +0000 UTC m=+3.540841036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.705323 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c65435549ab60 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.517867872 +0000 UTC m=+3.686646402,LastTimestamp:2026-01-20 09:08:13.517867872 +0000 UTC m=+3.686646402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.709859 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c65435739e698 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.550388888 +0000 UTC m=+3.719167408,LastTimestamp:2026-01-20 09:08:13.550388888 +0000 UTC m=+3.719167408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.714351 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65435e287e00 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.666688512 +0000 UTC m=+3.835467042,LastTimestamp:2026-01-20 09:08:13.666688512 +0000 UTC m=+3.835467042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.719215 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65435eb090da openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.675606234 +0000 UTC m=+3.844384764,LastTimestamp:2026-01-20 09:08:13.675606234 +0000 UTC m=+3.844384764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.724578 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c65438321e948 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.287014216 +0000 UTC m=+4.455792756,LastTimestamp:2026-01-20 09:08:14.287014216 +0000 UTC m=+4.455792756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.729018 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c65439559065e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.59261603 +0000 UTC m=+4.761394570,LastTimestamp:2026-01-20 09:08:14.59261603 +0000 UTC m=+4.761394570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.733310 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543961c4507 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.605411591 +0000 UTC m=+4.774190131,LastTimestamp:2026-01-20 09:08:14.605411591 +0000 UTC m=+4.774190131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.739607 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c654396315398 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.606791576 +0000 UTC m=+4.775570106,LastTimestamp:2026-01-20 09:08:14.606791576 +0000 UTC m=+4.775570106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.744997 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543a49a7abb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.848563899 +0000 UTC m=+5.017342449,LastTimestamp:2026-01-20 09:08:14.848563899 +0000 UTC m=+5.017342449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.749119 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543a645870a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.876550922 +0000 UTC m=+5.045329462,LastTimestamp:2026-01-20 09:08:14.876550922 +0000 UTC m=+5.045329462,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.754528 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543a657f4e5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:14.877758693 +0000 UTC m=+5.046537233,LastTimestamp:2026-01-20 09:08:14.877758693 +0000 UTC m=+5.046537233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.759711 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543b296d17a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.083204986 +0000 UTC m=+5.251983516,LastTimestamp:2026-01-20 09:08:15.083204986 +0000 UTC m=+5.251983516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.764481 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543b35e64a4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.096284324 +0000 UTC m=+5.265062854,LastTimestamp:2026-01-20 09:08:15.096284324 +0000 UTC m=+5.265062854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.770847 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543b36e64c3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.097332931 +0000 UTC m=+5.266111481,LastTimestamp:2026-01-20 09:08:15.097332931 +0000 UTC m=+5.266111481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.776540 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543c2682d8b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.348583819 +0000 UTC m=+5.517362349,LastTimestamp:2026-01-20 09:08:15.348583819 +0000 UTC m=+5.517362349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.781754 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543c3420d5c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.362862428 +0000 UTC m=+5.531640958,LastTimestamp:2026-01-20 09:08:15.362862428 +0000 UTC m=+5.531640958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.787023 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543c3547c09 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.364070409 +0000 UTC m=+5.532848939,LastTimestamp:2026-01-20 09:08:15.364070409 +0000 UTC m=+5.532848939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.792611 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543cf5fafdc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.566131164 +0000 UTC m=+5.734909694,LastTimestamp:2026-01-20 09:08:15.566131164 +0000 UTC m=+5.734909694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.797825 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188c6543d0071c80 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:15.577103488 +0000 UTC m=+5.745882018,LastTimestamp:2026-01-20 09:08:15.577103488 +0000 UTC m=+5.745882018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.806549 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-controller-manager-crc.188c6544cbc4d31a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded
Jan 20 09:08:31 crc kubenswrapper[5115]: body:
Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:19.800617754 +0000 UTC m=+9.969396284,LastTimestamp:2026-01-20 09:08:19.800617754 +0000 UTC m=+9.969396284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 20 09:08:31 crc kubenswrapper[5115]: >
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.811772 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c6544cbc658cd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:19.800717517 +0000 UTC m=+9.969496047,LastTimestamp:2026-01-20 09:08:19.800717517 +0000 UTC m=+9.969496047,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.817162 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-apiserver-crc.188c65463ccf0ca6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Jan 20 09:08:31 crc kubenswrapper[5115]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 20 09:08:31 crc kubenswrapper[5115]:
Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:25.99208055 +0000 UTC m=+16.160859090,LastTimestamp:2026-01-20 09:08:25.99208055 +0000 UTC m=+16.160859090,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 20 09:08:31 crc kubenswrapper[5115]: >
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.824008 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65463cd06775 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:25.992169333 +0000 UTC m=+16.160947873,LastTimestamp:2026-01-20 09:08:25.992169333 +0000 UTC m=+16.160947873,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.831209 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65463ccf0ca6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-apiserver-crc.188c65463ccf0ca6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Jan 20 09:08:31 crc kubenswrapper[5115]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 20 09:08:31 crc kubenswrapper[5115]:
Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:25.99208055 +0000 UTC m=+16.160859090,LastTimestamp:2026-01-20 09:08:25.999499043 +0000 UTC m=+16.168277573,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 20 09:08:31 crc kubenswrapper[5115]: >
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.838163 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65463cd06775\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65463cd06775 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:25.992169333 +0000 UTC m=+16.160947873,LastTimestamp:2026-01-20 09:08:25.999556224 +0000 UTC m=+16.168334754,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.843694 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-controller-manager-crc.188c65471fd4bd95 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 20 09:08:31 crc kubenswrapper[5115]: body:
Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:29.800881557 +0000 UTC m=+19.969660117,LastTimestamp:2026-01-20 09:08:29.800881557 +0000 UTC m=+19.969660117,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 20 09:08:31 crc kubenswrapper[5115]: >
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.849111 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188c65471fd651cf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:29.800985039 +0000 UTC m=+19.969763599,LastTimestamp:2026-01-20 09:08:29.800985039 +0000 UTC m=+19.969763599,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.855077 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-apiserver-crc.188c654768a59e81 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44822->192.168.126.11:17697: read: connection reset by peer
Jan 20 09:08:31 crc kubenswrapper[5115]: body:
Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:31.022530177 +0000 UTC m=+21.191308707,LastTimestamp:2026-01-20 09:08:31.022530177 +0000 UTC m=+21.191308707,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 20 09:08:31 crc kubenswrapper[5115]: >
Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.858920 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c654768a631f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44822->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:31.022567928 +0000 UTC m=+21.191346458,LastTimestamp:2026-01-20 09:08:31.022567928 +0000 UTC m=+21.191346458,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.862756 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-apiserver-crc.188c654768a972f6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 20 09:08:31 crc kubenswrapper[5115]: body: Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:31.022781174 +0000 UTC m=+21.191559704,LastTimestamp:2026-01-20 09:08:31.022781174 +0000 UTC m=+21.191559704,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 20 09:08:31 crc kubenswrapper[5115]: > Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.866509 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c654768a9bbc6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:31.022799814 +0000 UTC m=+21.191578344,LastTimestamp:2026-01-20 09:08:31.022799814 +0000 UTC m=+21.191578344,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.872167 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 20 09:08:31 crc kubenswrapper[5115]: &Event{ObjectMeta:{kube-apiserver-crc.188c654768ad87a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44806->192.168.126.11:17697: read: connection reset by peer Jan 20 09:08:31 crc kubenswrapper[5115]: body: Jan 20 09:08:31 crc kubenswrapper[5115]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:31.023048612 +0000 UTC m=+21.191827162,LastTimestamp:2026-01-20 09:08:31.023048612 +0000 UTC m=+21.191827162,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 20 09:08:31 crc kubenswrapper[5115]: > Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.876988 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c654768b02c92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44806->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:31.023221906 +0000 UTC m=+21.192000446,LastTimestamp:2026-01-20 09:08:31.023221906 +0000 UTC m=+21.192000446,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.881835 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65434c98db20\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65434c98db20 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container 
image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.372062496 +0000 UTC m=+3.540841036,LastTimestamp:2026-01-20 09:08:31.36119083 +0000 UTC m=+21.529969360,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.893369 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65435e287e00\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65435e287e00 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.666688512 +0000 UTC m=+3.835467042,LastTimestamp:2026-01-20 09:08:31.684774634 +0000 UTC m=+21.853553164,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:31 crc kubenswrapper[5115]: E0120 09:08:31.902911 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65435eb090da\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65435eb090da openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.675606234 +0000 UTC m=+3.844384764,LastTimestamp:2026-01-20 09:08:31.695080057 +0000 UTC m=+21.863858577,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.082760 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.363416 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.365383 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e"} Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.365666 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.366290 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.366363 5115 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:32 crc kubenswrapper[5115]: I0120 09:08:32.366387 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:32 crc kubenswrapper[5115]: E0120 09:08:32.367027 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:32 crc kubenswrapper[5115]: E0120 09:08:32.718497 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.082548 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.242338 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.242665 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.244285 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.244343 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.244355 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:33 crc kubenswrapper[5115]: E0120 09:08:33.244831 5115 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.255950 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.370616 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.371329 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.373194 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e" exitCode=255 Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.373479 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.373537 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e"} Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.373612 5115 scope.go:117] "RemoveContainer" containerID="8f5d392c8c16bc8dca522160d2028e27d588d5ba566d833fde1e5414c1a50de2" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.373858 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.375002 5115 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.375053 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.375075 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.375006 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.375176 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.375214 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:33 crc kubenswrapper[5115]: E0120 09:08:33.376076 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:33 crc kubenswrapper[5115]: E0120 09:08:33.376239 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:33 crc kubenswrapper[5115]: I0120 09:08:33.376600 5115 scope.go:117] "RemoveContainer" containerID="e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e" Jan 20 09:08:33 crc kubenswrapper[5115]: E0120 09:08:33.376913 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:08:33 crc 
kubenswrapper[5115]: E0120 09:08:33.385433 5115 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6547f4f94bb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,LastTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:34 crc kubenswrapper[5115]: I0120 09:08:34.084983 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:34 crc kubenswrapper[5115]: I0120 09:08:34.379040 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 20 09:08:35 crc kubenswrapper[5115]: I0120 09:08:35.085741 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:35 
crc kubenswrapper[5115]: I0120 09:08:35.975737 5115 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:35 crc kubenswrapper[5115]: I0120 09:08:35.976054 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:35 crc kubenswrapper[5115]: I0120 09:08:35.977425 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:35 crc kubenswrapper[5115]: I0120 09:08:35.977512 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:35 crc kubenswrapper[5115]: I0120 09:08:35.977536 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:35 crc kubenswrapper[5115]: E0120 09:08:35.978268 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:35 crc kubenswrapper[5115]: I0120 09:08:35.978721 5115 scope.go:117] "RemoveContainer" containerID="e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e" Jan 20 09:08:35 crc kubenswrapper[5115]: E0120 09:08:35.979107 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:08:35 crc kubenswrapper[5115]: E0120 09:08:35.986291 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c6547f4f94bb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6547f4f94bb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,LastTimestamp:2026-01-20 09:08:35.979047718 +0000 UTC m=+26.147826288,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.078588 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.807312 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.807634 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.809074 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.809141 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.809156 5115 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:36 crc kubenswrapper[5115]: E0120 09:08:36.809626 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:36 crc kubenswrapper[5115]: I0120 09:08:36.815512 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.081602 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.381448 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.383045 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.383120 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.383139 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.383177 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.391480 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:37 crc kubenswrapper[5115]: E0120 09:08:37.392151 5115 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: 
User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.392511 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.392552 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:37 crc kubenswrapper[5115]: I0120 09:08:37.392564 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:37 crc kubenswrapper[5115]: E0120 09:08:37.392929 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:38 crc kubenswrapper[5115]: I0120 09:08:38.080445 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:39 crc kubenswrapper[5115]: I0120 09:08:39.086026 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:39 crc kubenswrapper[5115]: E0120 09:08:39.727437 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 20 09:08:40 crc kubenswrapper[5115]: I0120 09:08:40.084125 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:40 crc kubenswrapper[5115]: E0120 09:08:40.138849 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 09:08:40 crc kubenswrapper[5115]: E0120 09:08:40.265997 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 09:08:40 crc kubenswrapper[5115]: E0120 09:08:40.684474 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 09:08:40 crc kubenswrapper[5115]: E0120 09:08:40.989519 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 09:08:41 crc kubenswrapper[5115]: I0120 09:08:41.077595 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.084795 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" 
at the cluster scope Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.366791 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.367215 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.368492 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.369084 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.369347 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:42 crc kubenswrapper[5115]: E0120 09:08:42.370326 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:42 crc kubenswrapper[5115]: I0120 09:08:42.371043 5115 scope.go:117] "RemoveContainer" containerID="e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e" Jan 20 09:08:42 crc kubenswrapper[5115]: E0120 09:08:42.371579 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:08:42 crc kubenswrapper[5115]: E0120 09:08:42.380603 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c6547f4f94bb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API 
group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6547f4f94bb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,LastTimestamp:2026-01-20 09:08:42.371510742 +0000 UTC m=+32.540289322,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:43 crc kubenswrapper[5115]: I0120 09:08:43.085263 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:43 crc kubenswrapper[5115]: E0120 09:08:43.205236 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 09:08:44 crc kubenswrapper[5115]: I0120 09:08:44.084866 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:44 crc kubenswrapper[5115]: I0120 09:08:44.393128 5115 kubelet_node_status.go:413] 
"Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:44 crc kubenswrapper[5115]: I0120 09:08:44.394605 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:44 crc kubenswrapper[5115]: I0120 09:08:44.394673 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:44 crc kubenswrapper[5115]: I0120 09:08:44.394689 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:44 crc kubenswrapper[5115]: I0120 09:08:44.394723 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 20 09:08:44 crc kubenswrapper[5115]: E0120 09:08:44.411761 5115 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 20 09:08:45 crc kubenswrapper[5115]: I0120 09:08:45.084690 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:46 crc kubenswrapper[5115]: I0120 09:08:46.082366 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:46 crc kubenswrapper[5115]: E0120 09:08:46.733048 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 20 09:08:47 crc 
kubenswrapper[5115]: I0120 09:08:47.083008 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:48 crc kubenswrapper[5115]: I0120 09:08:48.085834 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:49 crc kubenswrapper[5115]: I0120 09:08:49.082643 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:50 crc kubenswrapper[5115]: I0120 09:08:50.083714 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:50 crc kubenswrapper[5115]: E0120 09:08:50.267201 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 09:08:51 crc kubenswrapper[5115]: I0120 09:08:51.085562 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:51 crc kubenswrapper[5115]: I0120 09:08:51.412595 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:51 crc kubenswrapper[5115]: I0120 09:08:51.413874 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 09:08:51 crc kubenswrapper[5115]: I0120 09:08:51.413968 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:51 crc kubenswrapper[5115]: I0120 09:08:51.413985 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:51 crc kubenswrapper[5115]: I0120 09:08:51.414028 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 20 09:08:51 crc kubenswrapper[5115]: E0120 09:08:51.423966 5115 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 20 09:08:52 crc kubenswrapper[5115]: I0120 09:08:52.084654 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:53 crc kubenswrapper[5115]: I0120 09:08:53.081368 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:53 crc kubenswrapper[5115]: E0120 09:08:53.738432 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 20 09:08:54 crc kubenswrapper[5115]: I0120 09:08:54.081061 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.083758 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.216787 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.218081 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.218145 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.218160 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:55 crc kubenswrapper[5115]: E0120 09:08:55.218641 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.219173 5115 scope.go:117] "RemoveContainer" containerID="e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e" Jan 20 09:08:55 crc kubenswrapper[5115]: E0120 09:08:55.225485 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65434c98db20\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65434c98db20 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.372062496 +0000 UTC m=+3.540841036,LastTimestamp:2026-01-20 09:08:55.221134156 +0000 UTC m=+45.389912686,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:55 crc kubenswrapper[5115]: E0120 09:08:55.433857 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c65435e287e00\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65435e287e00 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.666688512 +0000 UTC m=+3.835467042,LastTimestamp:2026-01-20 09:08:55.429202234 +0000 UTC m=+45.597980784,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:55 crc kubenswrapper[5115]: E0120 09:08:55.442990 5115 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.188c65435eb090da\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c65435eb090da openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:13.675606234 +0000 UTC m=+3.844384764,LastTimestamp:2026-01-20 09:08:55.44081004 +0000 UTC m=+45.609588570,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.451300 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.453298 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb"} Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.453747 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.454458 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.454515 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 09:08:55 crc kubenswrapper[5115]: I0120 09:08:55.454528 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:55 crc kubenswrapper[5115]: E0120 09:08:55.454976 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.081296 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:56 crc kubenswrapper[5115]: E0120 09:08:56.239949 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.460565 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.462152 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.464371 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb" exitCode=255 Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.464445 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb"} Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.464494 5115 scope.go:117] "RemoveContainer" containerID="e157d56d2881873558c3a0d9a4b25ce3b65ef2f77f4f1d4eda7729ff24e3dc7e" Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.464759 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.465628 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.465724 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.465756 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:56 crc kubenswrapper[5115]: E0120 09:08:56.466465 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:56 crc kubenswrapper[5115]: I0120 09:08:56.467055 5115 scope.go:117] "RemoveContainer" containerID="df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb" Jan 20 09:08:56 crc kubenswrapper[5115]: E0120 09:08:56.467540 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:08:56 crc kubenswrapper[5115]: E0120 09:08:56.476989 5115 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.188c6547f4f94bb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6547f4f94bb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,LastTimestamp:2026-01-20 09:08:56.467473698 +0000 UTC m=+46.636252268,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:08:57 crc kubenswrapper[5115]: I0120 09:08:57.082349 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:57 crc kubenswrapper[5115]: I0120 09:08:57.472987 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 20 09:08:58 crc kubenswrapper[5115]: E0120 09:08:58.050701 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.RuntimeClass" Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.081840 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.424751 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.426434 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.426640 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.426781 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.426991 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 20 09:08:58 crc kubenswrapper[5115]: E0120 09:08:58.440186 5115 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.811672 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.812032 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.813323 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 
09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.813367 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:08:58 crc kubenswrapper[5115]: I0120 09:08:58.813382 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:08:58 crc kubenswrapper[5115]: E0120 09:08:58.813805 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:08:59 crc kubenswrapper[5115]: I0120 09:08:59.084620 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:00 crc kubenswrapper[5115]: I0120 09:09:00.082962 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:00 crc kubenswrapper[5115]: E0120 09:09:00.268120 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 09:09:00 crc kubenswrapper[5115]: E0120 09:09:00.745084 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 20 09:09:01 crc kubenswrapper[5115]: I0120 09:09:01.081980 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:02 crc 
kubenswrapper[5115]: I0120 09:09:02.083721 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:03 crc kubenswrapper[5115]: I0120 09:09:03.080648 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:04 crc kubenswrapper[5115]: I0120 09:09:04.082745 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.082332 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.441121 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.442480 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.442522 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.442533 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.442567 5115 kubelet_node_status.go:78] "Attempting to register node" 
node="crc" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.454102 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.454328 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.454955 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.455096 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.455174 5115 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.455184 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.455704 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.456165 5115 scope.go:117] "RemoveContainer" containerID="df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb" Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.456563 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.461598 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c6547f4f94bb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6547f4f94bb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,LastTimestamp:2026-01-20 09:09:05.456523977 +0000 UTC m=+55.625302517,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.694467 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.975913 5115 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.976528 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:05 
crc kubenswrapper[5115]: I0120 09:09:05.977885 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.978027 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.978056 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.978977 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:09:05 crc kubenswrapper[5115]: I0120 09:09:05.979520 5115 scope.go:117] "RemoveContainer" containerID="df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb" Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.979908 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:09:05 crc kubenswrapper[5115]: E0120 09:09:05.988224 5115 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188c6547f4f94bb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188c6547f4f94bb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:08:33.376824243 +0000 UTC m=+23.545602773,LastTimestamp:2026-01-20 09:09:05.979830559 +0000 UTC m=+56.148609119,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:09:06 crc kubenswrapper[5115]: I0120 09:09:06.082764 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:07 crc kubenswrapper[5115]: I0120 09:09:07.085019 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:07 crc kubenswrapper[5115]: E0120 09:09:07.678930 5115 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 09:09:07 crc kubenswrapper[5115]: E0120 09:09:07.752073 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace 
\"kube-node-lease\"" interval="7s" Jan 20 09:09:08 crc kubenswrapper[5115]: I0120 09:09:08.081236 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:09 crc kubenswrapper[5115]: I0120 09:09:09.085610 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:10 crc kubenswrapper[5115]: I0120 09:09:10.082288 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:10 crc kubenswrapper[5115]: E0120 09:09:10.268933 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 09:09:11 crc kubenswrapper[5115]: I0120 09:09:11.082657 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:12 crc kubenswrapper[5115]: I0120 09:09:12.083046 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:12 crc kubenswrapper[5115]: I0120 09:09:12.455513 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:12 crc kubenswrapper[5115]: I0120 09:09:12.457653 5115 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:12 crc kubenswrapper[5115]: I0120 09:09:12.457706 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:12 crc kubenswrapper[5115]: I0120 09:09:12.457729 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:12 crc kubenswrapper[5115]: I0120 09:09:12.457758 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 20 09:09:12 crc kubenswrapper[5115]: E0120 09:09:12.469233 5115 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 20 09:09:13 crc kubenswrapper[5115]: I0120 09:09:13.082441 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:14 crc kubenswrapper[5115]: I0120 09:09:14.080340 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:14 crc kubenswrapper[5115]: E0120 09:09:14.758799 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 20 09:09:15 crc kubenswrapper[5115]: I0120 09:09:15.082289 5115 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 20 09:09:15 crc kubenswrapper[5115]: I0120 09:09:15.642006 5115 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-6tc4b" Jan 20 09:09:15 crc kubenswrapper[5115]: I0120 09:09:15.648807 5115 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-6tc4b" Jan 20 09:09:15 crc kubenswrapper[5115]: I0120 09:09:15.739444 5115 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 20 09:09:15 crc kubenswrapper[5115]: I0120 09:09:15.990253 5115 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 20 09:09:16 crc kubenswrapper[5115]: I0120 09:09:16.650251 5115 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-19 09:04:15 +0000 UTC" deadline="2026-02-11 10:22:01.71770075 +0000 UTC" Jan 20 09:09:16 crc kubenswrapper[5115]: I0120 09:09:16.650348 5115 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="529h12m45.067359934s" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.470035 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.471603 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.471749 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.471819 5115 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.472071 5115 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.485210 5115 kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.485790 5115 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.485934 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.489635 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.489691 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.489702 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.489722 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.489736 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:19Z","lastTransitionTime":"2026-01-20T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.502803 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.513786 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.513838 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.513851 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.513867 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.513878 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:19Z","lastTransitionTime":"2026-01-20T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.522653 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.530204 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.530251 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.530264 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.530281 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.530293 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:19Z","lastTransitionTime":"2026-01-20T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.541163 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.547520 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.547583 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.547598 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.547621 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:19 crc kubenswrapper[5115]: I0120 09:09:19.547635 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:19Z","lastTransitionTime":"2026-01-20T09:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.556548 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.556710 5115 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.556748 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.656943 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.758131 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.858331 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:19 crc kubenswrapper[5115]: E0120 09:09:19.959066 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.059382 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.160378 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 
09:09:20.261507 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.269963 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.361773 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.462931 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.564166 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.664500 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.764700 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.866079 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:20 crc kubenswrapper[5115]: E0120 09:09:20.967347 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.068602 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.168949 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.216987 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.217853 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.217882 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.217914 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.218371 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.218627 5115 scope.go:117] "RemoveContainer" containerID="df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.269667 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.370077 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.470993 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.549509 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.551675 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b"} Jan 20 
09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.551906 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.552594 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.552634 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:21 crc kubenswrapper[5115]: I0120 09:09:21.552647 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.553087 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.571378 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.672318 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.772755 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.873831 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:21 crc kubenswrapper[5115]: E0120 09:09:21.974695 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.075204 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.175378 5115 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.275786 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.375981 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.476111 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.556283 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.556761 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.558670 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" exitCode=255 Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.558750 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b"} Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.558858 5115 scope.go:117] "RemoveContainer" containerID="df32e00f083482ec09df9e5a364f853a077b7da4bc1f27c5f26092bc413089cb" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.559088 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:22 crc 
kubenswrapper[5115]: I0120 09:09:22.559728 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.559766 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.559776 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.560199 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:09:22 crc kubenswrapper[5115]: I0120 09:09:22.560526 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.560813 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.576773 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.677130 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.778157 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.878533 5115 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Jan 20 09:09:22 crc kubenswrapper[5115]: E0120 09:09:22.979492 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.080726 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.181578 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.281811 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.382730 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.482996 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: I0120 09:09:23.563838 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.583458 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.683799 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.785233 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc kubenswrapper[5115]: E0120 09:09:23.885783 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:23 crc 
kubenswrapper[5115]: E0120 09:09:23.986605 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.087383 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.188382 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.289198 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.389880 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.491041 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.592099 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.693094 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.794350 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.895216 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:24 crc kubenswrapper[5115]: E0120 09:09:24.995749 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.096466 5115 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.196840 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.297637 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.398193 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.499009 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.600069 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.700487 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.801688 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.901839 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:25 crc kubenswrapper[5115]: I0120 09:09:25.975591 5115 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:09:25 crc kubenswrapper[5115]: I0120 09:09:25.976003 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 20 09:09:25 crc kubenswrapper[5115]: I0120 09:09:25.977240 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:25 crc kubenswrapper[5115]: I0120 09:09:25.977305 5115 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:25 crc kubenswrapper[5115]: I0120 09:09:25.977327 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.978065 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 20 09:09:25 crc kubenswrapper[5115]: I0120 09:09:25.978524 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" Jan 20 09:09:25 crc kubenswrapper[5115]: E0120 09:09:25.978870 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.002647 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.103714 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.204658 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.305758 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.406713 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc 
kubenswrapper[5115]: E0120 09:09:26.507290 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.608125 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.708631 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.808769 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:26 crc kubenswrapper[5115]: E0120 09:09:26.908920 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.009678 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.110530 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.211732 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.312791 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.413663 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.514107 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.614737 5115 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.715058 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.815235 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:27 crc kubenswrapper[5115]: E0120 09:09:27.915450 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.016844 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.118004 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.218858 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.319642 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.420669 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.521110 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.621626 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.722046 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.823280 5115 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:28 crc kubenswrapper[5115]: E0120 09:09:28.924120 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.024723 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.125032 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.225871 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.326501 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.426918 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.527667 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.628846 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.729138 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.829998 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.888764 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 20 09:09:29 crc 
kubenswrapper[5115]: I0120 09:09:29.895526 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.895567 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.895576 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.895612 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.895628 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:29Z","lastTransitionTime":"2026-01-20T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.911953 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.926547 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.926630 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.926651 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.926679 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.926699 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:29Z","lastTransitionTime":"2026-01-20T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.943766 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.956244 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.956399 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.956430 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.956466 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.956491 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:29Z","lastTransitionTime":"2026-01-20T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:29 crc kubenswrapper[5115]: E0120 09:09:29.971028 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.983952 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.984047 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.984063 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.984096 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:29 crc kubenswrapper[5115]: I0120 09:09:29.984113 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:29Z","lastTransitionTime":"2026-01-20T09:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.002271 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.002459 5115 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.002499 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.103565 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.203769 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.271210 5115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.304490 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.405560 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.505975 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.607176 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.707556 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.808142 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:30 crc kubenswrapper[5115]: E0120 09:09:30.908511 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.009117 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.110091 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.210316 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.311179 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.411705 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.511876 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:31 crc kubenswrapper[5115]: I0120 09:09:31.552603 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:09:31 crc kubenswrapper[5115]: I0120 09:09:31.553103 5115 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 20 09:09:31 crc kubenswrapper[5115]: I0120 09:09:31.554516 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:31 crc kubenswrapper[5115]: I0120 09:09:31.554610 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:31 crc kubenswrapper[5115]: I0120 09:09:31.554633 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.555525 5115 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 20 09:09:31 crc kubenswrapper[5115]: I0120 09:09:31.556004 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b"
Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.556429 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.612827 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.713769 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.814771 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:31 crc kubenswrapper[5115]: E0120 09:09:31.915957 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.016561 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.117417 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.217946 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.318590 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.418816 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.519811 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.620596 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.720832 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.821459 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:32 crc kubenswrapper[5115]: E0120 09:09:32.922000 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.023000 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.123382 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.224032 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.325204 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.425798 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.526533 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.626975 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.728031 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.828728 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:33 crc kubenswrapper[5115]: E0120 09:09:33.928958 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.029648 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.130807 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.230980 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.331586 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.431797 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.532392 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.633157 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.734201 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.834474 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:34 crc kubenswrapper[5115]: E0120 09:09:34.934676 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.035490 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.136398 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.237215 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.337865 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.438017 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.538655 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.639116 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.740276 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.841309 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:35 crc kubenswrapper[5115]: E0120 09:09:35.941866 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.042374 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.143564 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.243774 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.344319 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.444561 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.545328 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.646379 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.746876 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.847703 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:36 crc kubenswrapper[5115]: E0120 09:09:36.948279 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:37 crc kubenswrapper[5115]: E0120 09:09:37.048756 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:37 crc kubenswrapper[5115]: E0120 09:09:37.149963 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:37 crc kubenswrapper[5115]: E0120 09:09:37.250674 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:37 crc kubenswrapper[5115]: E0120 09:09:37.351256 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:37 crc kubenswrapper[5115]: E0120 09:09:37.452258 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:37 crc kubenswrapper[5115]: E0120 09:09:37.552879 5115 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.640639 5115 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.656414 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.656477 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.656490 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.656511 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.656527 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:37Z","lastTransitionTime":"2026-01-20T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.693564 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.709605 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.759093 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.759147 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.759160 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.759178 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.759192 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:37Z","lastTransitionTime":"2026-01-20T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.809070 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.862303 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.862413 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.862437 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.862462 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.862480 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:37Z","lastTransitionTime":"2026-01-20T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.911535 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.966307 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.966374 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.966392 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.966412 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:37 crc kubenswrapper[5115]: I0120 09:09:37.966427 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:37Z","lastTransitionTime":"2026-01-20T09:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.013604 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.069612 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.069696 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.069717 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.069743 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.069762 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.121302 5115 apiserver.go:52] "Watching apiserver"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.130252 5115 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.131914 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-node-pnd9p","openshift-image-registry/node-ca-5tt8v","openshift-kube-apiserver/kube-apiserver-crc","openshift-multus/multus-xjql7","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-etcd/etcd-crc","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/machine-config-daemon-zvfcd","openshift-multus/multus-additional-cni-plugins-bmvv2","openshift-multus/network-metrics-daemon-tzrjx","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7","openshift-dns/node-resolver-bht7q","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.133502 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.134250 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.134325 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.134242 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.136621 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.136999 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.139293 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.143936 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.145642 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.146924 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.148240 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.149688 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.149839 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.149875 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.157822 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.157847 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.158494 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.158789 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.159010 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.159506 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.159650 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.159677 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.159517 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.160492 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.161204 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xjql7"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.161564 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-5tt8v"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.164039 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.164432 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.164911 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.168457 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.169136 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.171473 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.171754 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.171820 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.172104 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173065 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173108 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173122 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173109 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173142 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173157 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173132 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173387 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173636 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.173784 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.174521 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.174524 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.174678 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.176003 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.176835 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.177195 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.179379 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.181015 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.181105 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.181370 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.181498 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.182002 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.184159 5115 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.184343 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.184442 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.184952 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.185956 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.186810 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.188002 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.188306 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.194976 5115 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-kubelet\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195012 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-run-netns\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195036 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5976ec5f-b09c-4f83-802d-6042842fd8e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195063 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt9ld\" (UniqueName: \"kubernetes.io/projected/5976ec5f-b09c-4f83-802d-6042842fd8e6-kube-api-access-tt9ld\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195087 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-systemd-units\") pod \"ovnkube-node-pnd9p\" (UID: 
\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195106 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-ovn\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195123 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-cni-netd\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195152 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195210 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195251 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-var-lib-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195280 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195316 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovnkube-script-lib\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195345 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/92f344d4-34bc-4412-83c9-6b7beb45db64-serviceca\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195480 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9kn4\" (UniqueName: \"kubernetes.io/projected/0b51ef97-33e0-4889-bd54-ac4be09c39e7-kube-api-access-f9kn4\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195504 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195541 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovnkube-config\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195568 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195590 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-etc-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195611 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195644 5115 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p9bt\" (UniqueName: \"kubernetes.io/projected/650d165f-75fb-4a16-a8fa-d8366b5f6eea-kube-api-access-2p9bt\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195681 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195716 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-systemd\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.195970 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.196373 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " 
pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197057 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.196424 5115 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197164 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5976ec5f-b09c-4f83-802d-6042842fd8e6-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197310 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197363 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5976ec5f-b09c-4f83-802d-6042842fd8e6-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc 
kubenswrapper[5115]: I0120 09:09:38.197512 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197584 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197631 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-node-log\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197690 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197747 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197797 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197843 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-log-socket\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.197933 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/92f344d4-34bc-4412-83c9-6b7beb45db64-host\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.198106 5115 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.199566 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/650d165f-75fb-4a16-a8fa-d8366b5f6eea-hosts-file\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.198826 5115 swap_util.go:74] "error creating dir to test if tmpfs noswap is 
enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.199631 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:38.699603288 +0000 UTC m=+88.868381818 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.199683 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/650d165f-75fb-4a16-a8fa-d8366b5f6eea-tmp-dir\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.199709 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-cni-bin\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.199728 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwps7\" (UniqueName: \"kubernetes.io/projected/92f344d4-34bc-4412-83c9-6b7beb45db64-kube-api-access-rwps7\") pod \"node-ca-5tt8v\" (UID: 
\"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.199778 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.199799 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtcxt\" (UniqueName: \"kubernetes.io/projected/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-kube-api-access-wtcxt\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.198776 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.199939 5115 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.200119 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 
09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.200149 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-slash\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.200254 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:38.700201594 +0000 UTC m=+88.868980164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.200323 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-env-overrides\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.200388 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovn-node-metrics-cert\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.200466 5115 status_manager.go:919] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.201698 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.214411 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.214503 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: 
\"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.220045 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.220592 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.222331 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.222372 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.222390 5115 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.222516 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:38.722487231 +0000 UTC m=+88.891265781 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.231108 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.232492 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.232798 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.242385 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.254632 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.264125 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.271515 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.275567 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.275618 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.275634 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.275657 5115 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.275670 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.281145 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.291374 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.299882 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301149 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301204 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301239 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301272 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301324 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301351 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301379 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301407 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301432 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301475 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 20 09:09:38 crc 
kubenswrapper[5115]: I0120 09:09:38.301504 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301528 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301558 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301579 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301603 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301627 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: 
\"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301649 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301673 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301698 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301720 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301743 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 
09:09:38.301770 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301795 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301817 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301841 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301864 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301917 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: 
\"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301954 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.301978 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302002 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302029 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302060 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 20 09:09:38 crc 
kubenswrapper[5115]: I0120 09:09:38.302085 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302109 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302133 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302167 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302199 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302231 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302267 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302294 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302325 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302352 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302375 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 20 09:09:38 crc 
kubenswrapper[5115]: I0120 09:09:38.302398 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302427 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302463 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302495 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302528 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302555 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") 
pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302579 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302604 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302628 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302651 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302675 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302701 5115 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302725 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302756 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302789 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302818 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302846 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: 
\"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302881 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302929 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302956 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.302981 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303007 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303039 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303066 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303070 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303094 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303130 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303162 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303220 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303270 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303305 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303380 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303418 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303454 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303489 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303520 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303544 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303589 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303626 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303673 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303692 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303714 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303757 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303792 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303828 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303840 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303875 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303935 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.303972 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304013 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304048 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304096 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304252 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304276 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304322 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304749 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304800 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304874 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304952 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304987 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.304359 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305128 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305156 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305215 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305333 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305445 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305545 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.305631 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306025 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306130 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306176 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306215 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306253 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306290 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306327 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306363 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306406 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306446 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306692 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306744 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306772 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306797 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306827 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306853 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306877 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306932 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.306974 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307015 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307031 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307126 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307171 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307213 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307248 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307285 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307325 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307370 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307407 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307443 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307477 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307514 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307552 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307593 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307630 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307675 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307712 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307754 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307791 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307830 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308531 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308628 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308659 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308691 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308735 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308982 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307127 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307142 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307182 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307219 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307724 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307711 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307819 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.307994 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308631 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308647 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308671 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.309096 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308590 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.308455 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.309498 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:09:38.809119763 +0000 UTC m=+88.977898303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311330 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311346 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311352 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.309780 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.309958 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.309992 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310007 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310008 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311471 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310029 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310159 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310322 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310348 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310620 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310659 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310690 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310728 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.310987 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311006 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311378 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311767 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311797 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311810 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.311971 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.312024 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.312343 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.312542 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[
65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.313045 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.313290 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.313502 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.313678 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.313695 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.313773 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.314244 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.314278 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.314493 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.314843 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.315016 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.315174 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.315408 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.315516 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.315664 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.316109 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.316220 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.316302 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.309698 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.316606 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.316762 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.316792 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317118 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317248 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317369 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317466 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317515 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317569 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317597 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: 
\"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317627 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317652 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317733 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317761 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317788 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317816 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317843 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317870 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317915 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317944 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317972 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: 
\"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.317995 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318019 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318043 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318068 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318092 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318118 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318125 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318142 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318170 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318183 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318404 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318206 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318213 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318230 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318325 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318321 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318445 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318586 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318809 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319004 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.318564 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319106 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319171 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319452 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319519 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319684 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.319996 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320048 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320133 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320161 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320210 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320250 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320251 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320332 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320494 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320593 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320686 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320705 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320694 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320819 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320880 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320928 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320960 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.320996 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321142 5115 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321166 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321181 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321204 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321213 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321268 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321270 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321360 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321401 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321409 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321452 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321501 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321542 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321580 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321619 5115 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321658 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321686 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321692 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321757 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321789 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321814 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321880 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321929 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321953 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321985 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.321991 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322006 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322028 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322049 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322071 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322085 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322092 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322240 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322327 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322391 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322417 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322459 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322579 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322609 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322611 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322626 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322726 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322758 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322823 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322855 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.322946 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323004 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323032 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323064 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323073 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323094 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323127 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323261 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323300 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 
09:09:38.323343 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323363 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323382 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323092 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324707 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323189 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323241 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323481 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323546 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323665 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323808 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323889 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.323955 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324121 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324127 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324359 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324452 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324679 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324693 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324834 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324857 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324876 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod 
\"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.324690 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.325235 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.325525 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.325527 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327351 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327355 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327391 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327440 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327584 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327596 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-cni-bin\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.325688 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.325691 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327637 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rwps7\" (UniqueName: \"kubernetes.io/projected/92f344d4-34bc-4412-83c9-6b7beb45db64-kube-api-access-rwps7\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326003 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326166 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326338 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327677 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326462 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326469 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326615 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326740 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326813 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327154 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.326558 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327206 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327713 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-cni-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327845 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-cnibin\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327876 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327909 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-daemon-config\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327923 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.327944 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zmmw\" (UniqueName: \"kubernetes.io/projected/f41177fd-db48-43c1-9a8d-69cad41d3fab-kube-api-access-6zmmw\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328128 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wtcxt\" (UniqueName: \"kubernetes.io/projected/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-kube-api-access-wtcxt\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328168 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-slash\") pod \"ovnkube-node-pnd9p\" (UID: 
\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328204 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-env-overrides\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328150 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a4
1131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources
\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\
":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328282 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovn-node-metrics-cert\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328313 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g8mg\" (UniqueName: \"kubernetes.io/projected/dc89765b-3b00-4f86-ae67-a5088c182918-kube-api-access-7g8mg\") pod 
\"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328340 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-cni-multus\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328373 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-kubelet\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328399 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-run-netns\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328430 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5976ec5f-b09c-4f83-802d-6042842fd8e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328462 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-etc-kubernetes\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328469 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328502 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tt9ld\" (UniqueName: \"kubernetes.io/projected/5976ec5f-b09c-4f83-802d-6042842fd8e6-kube-api-access-tt9ld\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328538 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-systemd-units\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328570 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-ovn\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328737 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-cni-netd\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329103 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-cnibin\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329212 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-run-netns\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329212 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-slash\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329247 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-kubelet\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328505 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: 
"kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328608 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328669 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328690 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.328748 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329102 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329184 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.329446 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-var-lib-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330210 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-env-overrides\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330236 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-cni-netd\") pod \"ovnkube-node-pnd9p\" (UID: 
\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330291 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-systemd-units\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330308 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-ovn\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330274 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330324 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-var-lib-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330389 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330488 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovnkube-script-lib\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330533 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/92f344d4-34bc-4412-83c9-6b7beb45db64-serviceca\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330655 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f9kn4\" (UniqueName: \"kubernetes.io/projected/0b51ef97-33e0-4889-bd54-ac4be09c39e7-kube-api-access-f9kn4\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330703 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-system-cni-dir\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330739 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovnkube-config\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.330768 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.331080 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-cni-bin\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.331309 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.331402 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.331620 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.331705 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.331868 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332132 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332149 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332218 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-etc-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332305 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-etc-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332315 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332389 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2p9bt\" (UniqueName: \"kubernetes.io/projected/650d165f-75fb-4a16-a8fa-d8366b5f6eea-kube-api-access-2p9bt\") pod 
\"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332433 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-os-release\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332462 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332490 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc89765b-3b00-4f86-ae67-a5088c182918-mcd-auth-proxy-config\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332526 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-systemd\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332557 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" 
(UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-socket-dir-parent\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332581 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-k8s-cni-cncf-io\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332624 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332654 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.332683 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5976ec5f-b09c-4f83-802d-6042842fd8e6-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.333057 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.333301 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-systemd\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.333413 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.333439 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovnkube-config\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.333521 5115 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.333600 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs podName:3d8f5093-1a2e-4c32-8c74-b6cfb185cc99 nodeName:}" failed. 
No retries permitted until 2026-01-20 09:09:38.833579749 +0000 UTC m=+89.002358279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs") pod "network-metrics-daemon-tzrjx" (UID: "3d8f5093-1a2e-4c32-8c74-b6cfb185cc99") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.333632 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.334299 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5976ec5f-b09c-4f83-802d-6042842fd8e6-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.335936 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.336279 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovn-node-metrics-cert\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.337996 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0b51ef97-33e0-4889-bd54-ac4be09c39e7-ovnkube-script-lib\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.338986 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5976ec5f-b09c-4f83-802d-6042842fd8e6-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339108 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339146 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc89765b-3b00-4f86-ae67-a5088c182918-proxy-tls\") pod \"machine-config-daemon-zvfcd\" (UID: 
\"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339177 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-system-cni-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339210 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-node-log\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339283 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h55j\" (UniqueName: \"kubernetes.io/projected/4b42cc5a-50db-4588-8149-e758f33704ef-kube-api-access-7h55j\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339311 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-os-release\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339337 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-netns\") pod \"multus-xjql7\" (UID: 
\"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339360 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-cni-bin\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339384 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-hostroot\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339414 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339439 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339462 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-cni-binary-copy\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " 
pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339483 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dc89765b-3b00-4f86-ae67-a5088c182918-rootfs\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339505 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f41177fd-db48-43c1-9a8d-69cad41d3fab-cni-binary-copy\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339533 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-conf-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339556 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-multus-certs\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339582 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-log-socket\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339605 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/92f344d4-34bc-4412-83c9-6b7beb45db64-host\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339629 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/650d165f-75fb-4a16-a8fa-d8366b5f6eea-hosts-file\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339654 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/650d165f-75fb-4a16-a8fa-d8366b5f6eea-tmp-dir\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339678 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-kubelet\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339823 5115 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339842 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: 
\"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339856 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339870 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339883 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339915 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339931 5115 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339945 5115 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339961 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339976 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.339990 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340003 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340016 5115 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340030 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340044 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340059 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 
09:09:38.340072 5115 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340088 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340104 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340120 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340133 5115 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340147 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340163 5115 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340176 5115 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340189 5115 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340205 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340217 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340230 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340243 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340256 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340268 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc 
kubenswrapper[5115]: I0120 09:09:38.340281 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340293 5115 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340307 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340319 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340334 5115 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340347 5115 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340360 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340373 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: 
\"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340386 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340401 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340415 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340431 5115 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340444 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340457 5115 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340470 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: 
\"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340483 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340501 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340515 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340567 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340594 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340609 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340624 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: 
\"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340639 5115 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340653 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340669 5115 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340684 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340700 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.346950 5115 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347023 5115 reconciler_common.go:299] "Volume detached for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347043 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347101 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347117 5115 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347132 5115 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347191 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347227 5115 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347241 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347256 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347270 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347284 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347303 5115 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347315 5115 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347331 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347345 5115 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 
09:09:38.347357 5115 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347373 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347386 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347400 5115 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347413 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347427 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347441 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347456 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347471 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347486 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347500 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347665 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347690 5115 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347708 5115 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347727 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347746 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347762 5115 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347778 5115 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347796 5115 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347814 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347830 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347847 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347864 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347882 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347958 5115 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347978 5115 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.347994 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348011 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348029 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348049 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348047 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348066 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.341546 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5976ec5f-b09c-4f83-802d-6042842fd8e6-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348088 5115 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.340737 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348107 5115 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.341426 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-run-openvswitch\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348124 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348145 5115 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348162 5115 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348180 5115 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348199 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348217 5115 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348234 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348252 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348276 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348295 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348313 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348332 5115 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348350 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348369 5115 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348389 5115 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348406 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348422 5115 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348440 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348595 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348610 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348623 5115 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348636 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348649 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348662 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348675 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348688 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348702 5115 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348758 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348772 5115 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348815 5115 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348832 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348846 5115 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348860 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348873 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348888 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348918 5115 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348931 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348948 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.342006 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/650d165f-75fb-4a16-a8fa-d8366b5f6eea-tmp-dir\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.341658 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/92f344d4-34bc-4412-83c9-6b7beb45db64-host\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.341629 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-log-socket\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.341539 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0b51ef97-33e0-4889-bd54-ac4be09c39e7-node-log\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.348967 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349042 5115 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.341642 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/650d165f-75fb-4a16-a8fa-d8366b5f6eea-hosts-file\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349076 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349092 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349110 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349125 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349129 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349140 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349172 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349187 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349202 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349218 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349233 5115 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349256 5115 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349267 5115 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349278 5115 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349288 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349299 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349309 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349351 5115 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349362 5115 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349372 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349383 5115 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349393 5115 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349403 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349414 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349425 5115 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349435 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349446 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349456 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349465 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.349808 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/92f344d4-34bc-4412-83c9-6b7beb45db64-serviceca\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v"
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.350733 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.350817 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.351078 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.351019 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.351187 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.351520 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.351546 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.351562 5115 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.351646 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:38.851620421 +0000 UTC m=+89.020398951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.352188 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.352503 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.352620 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.352657 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.353098 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.353120 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.354208 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\
":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.354567 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.354692 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.355555 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5976ec5f-b09c-4f83-802d-6042842fd8e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.356001 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.356248 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.356381 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.356385 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.356871 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.356973 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt9ld\" (UniqueName: \"kubernetes.io/projected/5976ec5f-b09c-4f83-802d-6042842fd8e6-kube-api-access-tt9ld\") pod \"ovnkube-control-plane-57b78d8988-sfqm7\" (UID: \"5976ec5f-b09c-4f83-802d-6042842fd8e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357029 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357045 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357583 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357639 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357671 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357772 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.357943 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.358109 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.358127 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.358434 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.358482 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.358554 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.358823 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwps7\" (UniqueName: \"kubernetes.io/projected/92f344d4-34bc-4412-83c9-6b7beb45db64-kube-api-access-rwps7\") pod \"node-ca-5tt8v\" (UID: \"92f344d4-34bc-4412-83c9-6b7beb45db64\") " pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359020 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359076 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359423 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359475 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359587 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359715 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359802 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtcxt\" (UniqueName: \"kubernetes.io/projected/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-kube-api-access-wtcxt\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.359820 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.360514 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9kn4\" (UniqueName: \"kubernetes.io/projected/0b51ef97-33e0-4889-bd54-ac4be09c39e7-kube-api-access-f9kn4\") pod \"ovnkube-node-pnd9p\" (UID: \"0b51ef97-33e0-4889-bd54-ac4be09c39e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.360933 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p9bt\" (UniqueName: \"kubernetes.io/projected/650d165f-75fb-4a16-a8fa-d8366b5f6eea-kube-api-access-2p9bt\") pod \"node-resolver-bht7q\" (UID: \"650d165f-75fb-4a16-a8fa-d8366b5f6eea\") " pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.362244 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: 
"16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.362298 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.362572 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.365492 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.366072 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.375596 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.376650 5115 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.379561 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.379623 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.379636 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.379655 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.379673 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.385214 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.395203 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.402924 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.405620 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.406697 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.415827 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.428768 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.440081 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.450511 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7g8mg\" (UniqueName: \"kubernetes.io/projected/dc89765b-3b00-4f86-ae67-a5088c182918-kube-api-access-7g8mg\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.450556 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-cni-multus\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.450603 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-cni-multus\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.450796 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-etc-kubernetes\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.450863 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-etc-kubernetes\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.450972 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-cnibin\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451060 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-system-cni-dir\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: 
\"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451111 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-cnibin\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451126 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-os-release\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451158 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-system-cni-dir\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451272 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-os-release\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451154 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bmvv2\" 
(UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451360 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc89765b-3b00-4f86-ae67-a5088c182918-mcd-auth-proxy-config\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451391 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-socket-dir-parent\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451620 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-k8s-cni-cncf-io\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451673 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-k8s-cni-cncf-io\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451639 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-socket-dir-parent\") pod \"multus-xjql7\" (UID: 
\"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.451782 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452020 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc89765b-3b00-4f86-ae67-a5088c182918-proxy-tls\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452074 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-system-cni-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452040 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4b42cc5a-50db-4588-8149-e758f33704ef-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452122 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: 
\"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452126 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc89765b-3b00-4f86-ae67-a5088c182918-mcd-auth-proxy-config\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452119 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7h55j\" (UniqueName: \"kubernetes.io/projected/4b42cc5a-50db-4588-8149-e758f33704ef-kube-api-access-7h55j\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452209 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-os-release\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452211 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-system-cni-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452243 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-netns\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " 
pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452278 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-cni-bin\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452314 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-hostroot\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452351 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-cni-bin\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452286 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-os-release\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452318 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-netns\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452364 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-cni-binary-copy\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452429 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dc89765b-3b00-4f86-ae67-a5088c182918-rootfs\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452397 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-hostroot\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452460 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f41177fd-db48-43c1-9a8d-69cad41d3fab-cni-binary-copy\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452493 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dc89765b-3b00-4f86-ae67-a5088c182918-rootfs\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452494 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-conf-dir\") 
pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452548 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-multus-certs\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452575 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-conf-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452576 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-kubelet\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452602 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-var-lib-kubelet\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452639 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-host-run-multus-certs\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: 
I0120 09:09:38.452641 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452683 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-cni-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452715 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-cnibin\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452752 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-daemon-config\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452785 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6zmmw\" (UniqueName: \"kubernetes.io/projected/f41177fd-db48-43c1-9a8d-69cad41d3fab-kube-api-access-6zmmw\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452952 5115 reconciler_common.go:299] "Volume detached for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452975 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452996 5115 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.452989 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-cnibin\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453020 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453059 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-cni-binary-copy\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453064 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f41177fd-db48-43c1-9a8d-69cad41d3fab-cni-binary-copy\") pod \"multus-xjql7\" (UID: 
\"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453091 5115 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453150 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/4b42cc5a-50db-4588-8149-e758f33704ef-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453194 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453522 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-cni-dir\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453847 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453878 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453943 5115 reconciler_common.go:299] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453964 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.453982 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454001 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454018 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454036 5115 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454053 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454072 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454096 5115 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454118 5115 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454137 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454156 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454177 5115 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454195 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454213 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 20 
09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454231 5115 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454247 5115 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454265 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454282 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454303 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454320 5115 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454337 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454353 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454369 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454385 5115 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454402 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454418 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454608 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454628 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454645 5115 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454662 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454681 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454700 5115 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454717 5115 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454739 5115 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454758 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454778 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454799 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454817 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454835 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.454552 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f41177fd-db48-43c1-9a8d-69cad41d3fab-multus-daemon-config\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.457361 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc89765b-3b00-4f86-ae67-a5088c182918-proxy-tls\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.466837 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.473982 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h55j\" (UniqueName: \"kubernetes.io/projected/4b42cc5a-50db-4588-8149-e758f33704ef-kube-api-access-7h55j\") pod \"multus-additional-cni-plugins-bmvv2\" (UID: \"4b42cc5a-50db-4588-8149-e758f33704ef\") " pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.474081 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.475360 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zmmw\" (UniqueName: \"kubernetes.io/projected/f41177fd-db48-43c1-9a8d-69cad41d3fab-kube-api-access-6zmmw\") pod \"multus-xjql7\" (UID: \"f41177fd-db48-43c1-9a8d-69cad41d3fab\") " pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.477847 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g8mg\" (UniqueName: \"kubernetes.io/projected/dc89765b-3b00-4f86-ae67-a5088c182918-kube-api-access-7g8mg\") pod \"machine-config-daemon-zvfcd\" (UID: \"dc89765b-3b00-4f86-ae67-a5088c182918\") " pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.477793 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.482071 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.484192 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.484242 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.484263 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.484288 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.484308 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.484803 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: source /etc/kubernetes/apiserver-url.env Jan 20 09:09:38 crc kubenswrapper[5115]: else Jan 20 09:09:38 crc kubenswrapper[5115]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 20 09:09:38 crc kubenswrapper[5115]: exit 1 Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 20 09:09:38 crc kubenswrapper[5115]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.486045 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 20 09:09:38 crc kubenswrapper[5115]: W0120 09:09:38.487819 5115 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-89a1923678c192fbf3a8fa027b144dadcc5e7008b1288bb632d710d9da597b3f WatchSource:0}: Error finding container 89a1923678c192fbf3a8fa027b144dadcc5e7008b1288bb632d710d9da597b3f: Status 404 returned error can't find the container with id 89a1923678c192fbf3a8fa027b144dadcc5e7008b1288bb632d710d9da597b3f Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.492332 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xjql7" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.494549 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.495306 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: source "/env/_master" Jan 20 09:09:38 crc kubenswrapper[5115]: set +o allexport Jan 20 09:09:38 crc 
kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Jan 20 09:09:38 crc kubenswrapper[5115]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 20 09:09:38 crc kubenswrapper[5115]: ho_enable="--enable-hybrid-overlay" Jan 20 09:09:38 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 20 09:09:38 crc kubenswrapper[5115]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 20 09:09:38 crc kubenswrapper[5115]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --webhook-host=127.0.0.1 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --webhook-port=9743 \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ho_enable} \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-interconnect \ Jan 20 09:09:38 crc kubenswrapper[5115]: --disable-approver \ Jan 20 09:09:38 crc kubenswrapper[5115]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --wait-for-kubernetes-api=200s \ Jan 20 09:09:38 crc kubenswrapper[5115]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --loglevel="${LOGLEVEL}" Jan 20 09:09:38 crc kubenswrapper[5115]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc 
kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.496219 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.499800 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.501293 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5tt8v" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.504474 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: source "/env/_master" Jan 20 09:09:38 crc kubenswrapper[5115]: set +o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --disable-webhook \ Jan 20 09:09:38 crc kubenswrapper[5115]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --loglevel="${LOGLEVEL}" Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.506138 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 20 09:09:38 crc kubenswrapper[5115]: W0120 09:09:38.512269 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc89765b_3b00_4f86_ae67_a5088c182918.slice/crio-29419067e362c04408ee6901ca499156e52be8d357dd0341693b338a5accc60c WatchSource:0}: Error finding container 29419067e362c04408ee6901ca499156e52be8d357dd0341693b338a5accc60c: Status 404 returned error can't find the container with id 29419067e362c04408ee6901ca499156e52be8d357dd0341693b338a5accc60c Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.513963 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: W0120 09:09:38.515245 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf41177fd_db48_43c1_9a8d_69cad41d3fab.slice/crio-659f0ebcc7c90f8ab600f9b5cdedfe62387d5d2f5f114dc5c0d0a72e2046bbb2 WatchSource:0}: Error finding container 659f0ebcc7c90f8ab600f9b5cdedfe62387d5d2f5f114dc5c0d0a72e2046bbb2: Status 404 returned error can't find the container with id 659f0ebcc7c90f8ab600f9b5cdedfe62387d5d2f5f114dc5c0d0a72e2046bbb2 Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.515317 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g8mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-zvfcd_openshift-machine-config-operator(dc89765b-3b00-4f86-ae67-a5088c182918): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.516582 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 20 09:09:38 crc kubenswrapper[5115]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 20 09:09:38 crc kubenswrapper[5115]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zmmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-xjql7_openshift-multus(f41177fd-db48-43c1-9a8d-69cad41d3fab): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.517320 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g8mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-zvfcd_openshift-machine-config-operator(dc89765b-3b00-4f86-ae67-a5088c182918): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.518565 5115 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-xjql7" podUID="f41177fd-db48-43c1-9a8d-69cad41d3fab" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.518712 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.522928 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 20 09:09:38 crc kubenswrapper[5115]: while [ true ]; Jan 20 09:09:38 crc kubenswrapper[5115]: do Jan 20 09:09:38 crc kubenswrapper[5115]: for f in $(ls /tmp/serviceca); do Jan 20 09:09:38 crc kubenswrapper[5115]: echo $f Jan 20 09:09:38 crc kubenswrapper[5115]: ca_file_path="/tmp/serviceca/${f}" Jan 20 09:09:38 crc kubenswrapper[5115]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 20 09:09:38 crc kubenswrapper[5115]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 20 09:09:38 crc kubenswrapper[5115]: if [ -e "${reg_dir_path}" ]; then Jan 20 09:09:38 crc kubenswrapper[5115]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 20 09:09:38 crc kubenswrapper[5115]: else Jan 20 09:09:38 crc kubenswrapper[5115]: mkdir 
$reg_dir_path Jan 20 09:09:38 crc kubenswrapper[5115]: cp $ca_file_path $reg_dir_path/ca.crt Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: for d in $(ls /etc/docker/certs.d); do Jan 20 09:09:38 crc kubenswrapper[5115]: echo $d Jan 20 09:09:38 crc kubenswrapper[5115]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 20 09:09:38 crc kubenswrapper[5115]: reg_conf_path="/tmp/serviceca/${dp}" Jan 20 09:09:38 crc kubenswrapper[5115]: if [ ! -e "${reg_conf_path}" ]; then Jan 20 09:09:38 crc kubenswrapper[5115]: rm -rf /etc/docker/certs.d/$d Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait ${!} Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rwps7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-5tt8v_openshift-image-registry(92f344d4-34bc-4412-83c9-6b7beb45db64): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.524091 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-5tt8v" podUID="92f344d4-34bc-4412-83c9-6b7beb45db64" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.527411 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.529942 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bht7q" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.538420 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: W0120 09:09:38.542333 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod650d165f_75fb_4a16_a8fa_d8366b5f6eea.slice/crio-e5b425fcbf0f92c258bd50d42451ebe08ec6bd14f9d6b2c2df15cbbe24f22153 WatchSource:0}: Error finding container e5b425fcbf0f92c258bd50d42451ebe08ec6bd14f9d6b2c2df15cbbe24f22153: Status 404 returned error can't find the container with id e5b425fcbf0f92c258bd50d42451ebe08ec6bd14f9d6b2c2df15cbbe24f22153 Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.544346 5115 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 20 09:09:38 crc kubenswrapper[5115]: set -uo pipefail Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 20 09:09:38 crc kubenswrapper[5115]: HOSTS_FILE="/etc/hosts" Jan 20 09:09:38 crc kubenswrapper[5115]: TEMP_FILE="/tmp/hosts.tmp" Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Make a temporary file with the old hosts file's attributes. Jan 20 09:09:38 crc kubenswrapper[5115]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 20 09:09:38 crc kubenswrapper[5115]: echo "Failed to preserve hosts file. Exiting." Jan 20 09:09:38 crc kubenswrapper[5115]: exit 1 Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: while true; do Jan 20 09:09:38 crc kubenswrapper[5115]: declare -A svc_ips Jan 20 09:09:38 crc kubenswrapper[5115]: for svc in "${services[@]}"; do Jan 20 09:09:38 crc kubenswrapper[5115]: # Fetch service IP from cluster dns if present. We make several tries Jan 20 09:09:38 crc kubenswrapper[5115]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 20 09:09:38 crc kubenswrapper[5115]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 20 09:09:38 crc kubenswrapper[5115]: # support UDP loadbalancers and require reaching DNS through TCP. 
Jan 20 09:09:38 crc kubenswrapper[5115]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 20 09:09:38 crc kubenswrapper[5115]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 20 09:09:38 crc kubenswrapper[5115]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 20 09:09:38 crc kubenswrapper[5115]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 20 09:09:38 crc kubenswrapper[5115]: for i in ${!cmds[*]} Jan 20 09:09:38 crc kubenswrapper[5115]: do Jan 20 09:09:38 crc kubenswrapper[5115]: ips=($(eval "${cmds[i]}")) Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: svc_ips["${svc}"]="${ips[@]}" Jan 20 09:09:38 crc kubenswrapper[5115]: break Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Update /etc/hosts only if we get valid service IPs Jan 20 09:09:38 crc kubenswrapper[5115]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 20 09:09:38 crc kubenswrapper[5115]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 20 09:09:38 crc kubenswrapper[5115]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 20 09:09:38 crc kubenswrapper[5115]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait Jan 20 09:09:38 crc kubenswrapper[5115]: continue Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Append resolver entries for services Jan 20 09:09:38 crc kubenswrapper[5115]: rc=0 Jan 20 09:09:38 crc kubenswrapper[5115]: for svc in "${!svc_ips[@]}"; do Jan 20 09:09:38 crc kubenswrapper[5115]: for ip in ${svc_ips[${svc}]}; do Jan 20 09:09:38 crc kubenswrapper[5115]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ $rc -ne 0 ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait Jan 20 09:09:38 crc kubenswrapper[5115]: continue Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 20 09:09:38 crc kubenswrapper[5115]: # Replace /etc/hosts with our modified version if needed Jan 20 09:09:38 crc kubenswrapper[5115]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 20 09:09:38 crc kubenswrapper[5115]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait Jan 20 09:09:38 crc kubenswrapper[5115]: unset svc_ips Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p9bt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bht7q_openshift-dns(650d165f-75fb-4a16-a8fa-d8366b5f6eea): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.544364 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.546150 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bht7q" podUID="650d165f-75fb-4a16-a8fa-d8366b5f6eea" Jan 20 09:09:38 crc kubenswrapper[5115]: W0120 09:09:38.554460 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b42cc5a_50db_4588_8149_e758f33704ef.slice/crio-ec90b51c7f2a26c46864e33dac72b099ba300ae018c17039c97afc265a44269d WatchSource:0}: Error finding container ec90b51c7f2a26c46864e33dac72b099ba300ae018c17039c97afc265a44269d: Status 404 returned error can't find the container with id ec90b51c7f2a26c46864e33dac72b099ba300ae018c17039c97afc265a44269d Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.555621 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.557077 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h55j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-bmvv2_openshift-multus(4b42cc5a-50db-4588-8149-e758f33704ef): CreateContainerConfigError: services have not yet 
been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.558167 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" podUID="4b42cc5a-50db-4588-8149-e758f33704ef" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.558802 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.562929 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.571131 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 20 09:09:38 crc kubenswrapper[5115]: apiVersion: v1 Jan 20 09:09:38 crc kubenswrapper[5115]: clusters: Jan 20 09:09:38 crc kubenswrapper[5115]: - cluster: Jan 20 09:09:38 crc kubenswrapper[5115]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 20 09:09:38 crc kubenswrapper[5115]: server: https://api-int.crc.testing:6443 Jan 20 09:09:38 crc kubenswrapper[5115]: name: default-cluster Jan 20 09:09:38 crc kubenswrapper[5115]: contexts: Jan 20 09:09:38 crc kubenswrapper[5115]: - context: Jan 20 09:09:38 crc kubenswrapper[5115]: cluster: default-cluster Jan 20 09:09:38 crc kubenswrapper[5115]: namespace: default Jan 20 09:09:38 crc kubenswrapper[5115]: user: default-auth Jan 20 09:09:38 crc 
kubenswrapper[5115]: name: default-context Jan 20 09:09:38 crc kubenswrapper[5115]: current-context: default-context Jan 20 09:09:38 crc kubenswrapper[5115]: kind: Config Jan 20 09:09:38 crc kubenswrapper[5115]: preferences: {} Jan 20 09:09:38 crc kubenswrapper[5115]: users: Jan 20 09:09:38 crc kubenswrapper[5115]: - name: default-auth Jan 20 09:09:38 crc kubenswrapper[5115]: user: Jan 20 09:09:38 crc kubenswrapper[5115]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 20 09:09:38 crc kubenswrapper[5115]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 20 09:09:38 crc kubenswrapper[5115]: EOF Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9kn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-pnd9p_openshift-ovn-kubernetes(0b51ef97-33e0-4889-bd54-ac4be09c39e7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.572397 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" podUID="0b51ef97-33e0-4889-bd54-ac4be09c39e7" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.575306 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a1
9b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: W0120 09:09:38.579039 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5976ec5f_b09c_4f83_802d_6042842fd8e6.slice/crio-25c305cb1240d273fa3c305da112ecc85a86bebd9283beff23df411927835bcf WatchSource:0}: Error finding container 25c305cb1240d273fa3c305da112ecc85a86bebd9283beff23df411927835bcf: Status 404 returned error can't find the container with id 25c305cb1240d273fa3c305da112ecc85a86bebd9283beff23df411927835bcf Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.581591 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 20 09:09:38 crc kubenswrapper[5115]: set -euo pipefail Jan 20 09:09:38 crc kubenswrapper[5115]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 20 09:09:38 crc kubenswrapper[5115]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 20 09:09:38 crc kubenswrapper[5115]: # As the secret mount is optional we must wait for the files to be present. Jan 20 09:09:38 crc kubenswrapper[5115]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 20 09:09:38 crc kubenswrapper[5115]: TS=$(date +%s) Jan 20 09:09:38 crc kubenswrapper[5115]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 20 09:09:38 crc kubenswrapper[5115]: HAS_LOGGED_INFO=0 Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: log_missing_certs(){ Jan 20 09:09:38 crc kubenswrapper[5115]: CUR_TS=$(date +%s) Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 20 09:09:38 crc kubenswrapper[5115]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 20 09:09:38 crc kubenswrapper[5115]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 20 09:09:38 crc kubenswrapper[5115]: HAS_LOGGED_INFO=1 Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: } Jan 20 09:09:38 crc kubenswrapper[5115]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Jan 20 09:09:38 crc kubenswrapper[5115]: log_missing_certs Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 5 Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/kube-rbac-proxy \ Jan 20 09:09:38 crc kubenswrapper[5115]: --logtostderr \ Jan 20 09:09:38 crc kubenswrapper[5115]: --secure-listen-address=:9108 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --upstream=http://127.0.0.1:29108/ \ Jan 20 09:09:38 crc kubenswrapper[5115]: --tls-private-key-file=${TLS_PK} \ Jan 20 09:09:38 crc kubenswrapper[5115]: --tls-cert-file=${TLS_CERT} Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-sfqm7_openshift-ovn-kubernetes(5976ec5f-b09c-4f83-802d-6042842fd8e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.584237 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: source "/env/_master" Jan 20 09:09:38 crc kubenswrapper[5115]: set +o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_join_subnet_opt= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 20 
09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_join_subnet_opt= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_transit_switch_subnet_opt= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_transit_switch_subnet_opt= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: dns_name_resolver_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # This is needed so that converting clusters from GA to TP Jan 20 09:09:38 crc kubenswrapper[5115]: # will rollout control plane pods as well Jan 20 09:09:38 crc kubenswrapper[5115]: network_segmentation_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_enabled_flag="--enable-multi-network" Jan 20 09:09:38 crc kubenswrapper[5115]: fi 
Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" != "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_enabled_flag="--enable-multi-network" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: route_advertisements_enable_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: preconfigured_udn_addresses_enable_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Enable multi-network policy if configured (control-plane always full mode) Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_policy_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Enable admin network policy if configured (control-plane always full mode) Jan 20 09:09:38 crc kubenswrapper[5115]: admin_network_policy_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: 
admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: if [ "shared" == "shared" ]; then Jan 20 09:09:38 crc kubenswrapper[5115]: gateway_mode_flags="--gateway-mode shared" Jan 20 09:09:38 crc kubenswrapper[5115]: elif [ "shared" == "local" ]; then Jan 20 09:09:38 crc kubenswrapper[5115]: gateway_mode_flags="--gateway-mode local" Jan 20 09:09:38 crc kubenswrapper[5115]: else Jan 20 09:09:38 crc kubenswrapper[5115]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Jan 20 09:09:38 crc kubenswrapper[5115]: exit 1 Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/ovnkube \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-interconnect \ Jan 20 09:09:38 crc kubenswrapper[5115]: --init-cluster-manager "${K8S_NODE}" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 20 09:09:38 crc kubenswrapper[5115]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --metrics-bind-address "127.0.0.1:29108" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --metrics-enable-pprof \ Jan 20 09:09:38 crc kubenswrapper[5115]: --metrics-enable-config-duration \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v4_join_subnet_opt} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v6_join_subnet_opt} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${dns_name_resolver_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${persistent_ips_enabled_flag} \ Jan 20 09:09:38 crc 
kubenswrapper[5115]: ${multi_network_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${network_segmentation_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${gateway_mode_flags} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${route_advertisements_enable_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${preconfigured_udn_addresses_enable_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-ip=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-firewall=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-qos=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-service=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-multicast \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-multi-external-gateway=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${multi_network_policy_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${admin_network_policy_enabled_flag} Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-sfqm7_openshift-ovn-kubernetes(5976ec5f-b09c-4f83-802d-6042842fd8e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.586093 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" podUID="5976ec5f-b09c-4f83-802d-6042842fd8e6" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.593117 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.593636 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.593672 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.593692 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.593716 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.593737 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.610297 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"89a1923678c192fbf3a8fa027b144dadcc5e7008b1288bb632d710d9da597b3f"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.612657 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerStarted","Data":"ec90b51c7f2a26c46864e33dac72b099ba300ae018c17039c97afc265a44269d"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.612774 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: source "/env/_master" Jan 20 09:09:38 crc kubenswrapper[5115]: set +o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 20 09:09:38 crc kubenswrapper[5115]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 20 09:09:38 crc kubenswrapper[5115]: ho_enable="--enable-hybrid-overlay" Jan 20 09:09:38 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 20 09:09:38 crc kubenswrapper[5115]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 20 09:09:38 crc kubenswrapper[5115]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --webhook-host=127.0.0.1 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --webhook-port=9743 \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ho_enable} \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-interconnect \ Jan 20 09:09:38 crc kubenswrapper[5115]: --disable-approver \ Jan 20 09:09:38 crc kubenswrapper[5115]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --wait-for-kubernetes-api=200s \ Jan 20 09:09:38 crc kubenswrapper[5115]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --loglevel="${LOGLEVEL}" Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.613692 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"6d71e67b9b21d106693ff03a675acf8a5db31180ddb0ad6b25c400a878cf62f5"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 
09:09:38.616125 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 20 09:09:38 crc kubenswrapper[5115]: apiVersion: v1 Jan 20 09:09:38 crc kubenswrapper[5115]: clusters: Jan 20 09:09:38 crc kubenswrapper[5115]: - cluster: Jan 20 09:09:38 crc kubenswrapper[5115]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 20 09:09:38 crc kubenswrapper[5115]: server: https://api-int.crc.testing:6443 Jan 20 09:09:38 crc kubenswrapper[5115]: name: default-cluster Jan 20 09:09:38 crc kubenswrapper[5115]: contexts: Jan 20 09:09:38 crc kubenswrapper[5115]: - context: Jan 20 09:09:38 crc kubenswrapper[5115]: cluster: default-cluster Jan 20 09:09:38 crc kubenswrapper[5115]: namespace: default Jan 20 09:09:38 crc kubenswrapper[5115]: user: default-auth Jan 20 09:09:38 crc kubenswrapper[5115]: name: default-context Jan 20 09:09:38 crc kubenswrapper[5115]: current-context: default-context Jan 20 09:09:38 crc kubenswrapper[5115]: kind: Config Jan 20 09:09:38 crc kubenswrapper[5115]: preferences: {} Jan 20 09:09:38 crc kubenswrapper[5115]: users: Jan 20 09:09:38 crc kubenswrapper[5115]: - name: default-auth Jan 20 09:09:38 crc kubenswrapper[5115]: user: Jan 20 09:09:38 crc kubenswrapper[5115]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 20 09:09:38 crc kubenswrapper[5115]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 20 09:09:38 crc kubenswrapper[5115]: EOF Jan 20 09:09:38 crc kubenswrapper[5115]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9kn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-pnd9p_openshift-ovn-kubernetes(0b51ef97-33e0-4889-bd54-ac4be09c39e7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.616378 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"a218a6ae2be1beddf47aeeeff4e3067dfd815b4aa565a272744c67c1c9c4e7f9"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.616494 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: source 
"/env/_master" Jan 20 09:09:38 crc kubenswrapper[5115]: set +o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --disable-webhook \ Jan 20 09:09:38 crc kubenswrapper[5115]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --loglevel="${LOGLEVEL}" Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.617299 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" podUID="0b51ef97-33e0-4889-bd54-ac4be09c39e7" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.617682 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.617824 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.618706 5115 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" event={"ID":"5976ec5f-b09c-4f83-802d-6042842fd8e6","Type":"ContainerStarted","Data":"25c305cb1240d273fa3c305da112ecc85a86bebd9283beff23df411927835bcf"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.619369 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.619668 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h55j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,
MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-bmvv2_openshift-multus(4b42cc5a-50db-4588-8149-e758f33704ef): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.620026 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bht7q" event={"ID":"650d165f-75fb-4a16-a8fa-d8366b5f6eea","Type":"ContainerStarted","Data":"e5b425fcbf0f92c258bd50d42451ebe08ec6bd14f9d6b2c2df15cbbe24f22153"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.620323 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 20 09:09:38 crc kubenswrapper[5115]: set -euo pipefail Jan 20 09:09:38 crc kubenswrapper[5115]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 20 09:09:38 crc kubenswrapper[5115]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 20 09:09:38 crc kubenswrapper[5115]: # As the secret mount is optional we must wait for the files to be present. Jan 20 09:09:38 crc kubenswrapper[5115]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Jan 20 09:09:38 crc kubenswrapper[5115]: TS=$(date +%s) Jan 20 09:09:38 crc kubenswrapper[5115]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 20 09:09:38 crc kubenswrapper[5115]: HAS_LOGGED_INFO=0 Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: log_missing_certs(){ Jan 20 09:09:38 crc kubenswrapper[5115]: CUR_TS=$(date +%s) Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "${CUR_TS}" -gt "${WARN_TS}" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 20 09:09:38 crc kubenswrapper[5115]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 20 09:09:38 crc kubenswrapper[5115]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 20 09:09:38 crc kubenswrapper[5115]: HAS_LOGGED_INFO=1 Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: } Jan 20 09:09:38 crc kubenswrapper[5115]: while [[ ! -f "${TLS_PK}" || !
-f "${TLS_CERT}" ]] ; do Jan 20 09:09:38 crc kubenswrapper[5115]: log_missing_certs Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 5 Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/kube-rbac-proxy \ Jan 20 09:09:38 crc kubenswrapper[5115]: --logtostderr \ Jan 20 09:09:38 crc kubenswrapper[5115]: --secure-listen-address=:9108 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 20 09:09:38 crc kubenswrapper[5115]: --upstream=http://127.0.0.1:29108/ \ Jan 20 09:09:38 crc kubenswrapper[5115]: --tls-private-key-file=${TLS_PK} \ Jan 20 09:09:38 crc kubenswrapper[5115]: --tls-cert-file=${TLS_CERT} Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-sfqm7_openshift-ovn-kubernetes(5976ec5f-b09c-4f83-802d-6042842fd8e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.620824 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" podUID="4b42cc5a-50db-4588-8149-e758f33704ef" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.621697 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5tt8v" event={"ID":"92f344d4-34bc-4412-83c9-6b7beb45db64","Type":"ContainerStarted","Data":"761c02e36f1798649923e6cbb82508db2e808bd879684a78aa4fa13cfd46c504"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.622192 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container 
&Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 20 09:09:38 crc kubenswrapper[5115]: set -uo pipefail Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 20 09:09:38 crc kubenswrapper[5115]: HOSTS_FILE="/etc/hosts" Jan 20 09:09:38 crc kubenswrapper[5115]: TEMP_FILE="/tmp/hosts.tmp" Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Make a temporary file with the old hosts file's attributes. Jan 20 09:09:38 crc kubenswrapper[5115]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 20 09:09:38 crc kubenswrapper[5115]: echo "Failed to preserve hosts file. Exiting." Jan 20 09:09:38 crc kubenswrapper[5115]: exit 1 Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: while true; do Jan 20 09:09:38 crc kubenswrapper[5115]: declare -A svc_ips Jan 20 09:09:38 crc kubenswrapper[5115]: for svc in "${services[@]}"; do Jan 20 09:09:38 crc kubenswrapper[5115]: # Fetch service IP from cluster dns if present. We make several tries Jan 20 09:09:38 crc kubenswrapper[5115]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 20 09:09:38 crc kubenswrapper[5115]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 20 09:09:38 crc kubenswrapper[5115]: # support UDP loadbalancers and require reaching DNS through TCP. 
Jan 20 09:09:38 crc kubenswrapper[5115]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 20 09:09:38 crc kubenswrapper[5115]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 20 09:09:38 crc kubenswrapper[5115]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 20 09:09:38 crc kubenswrapper[5115]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 20 09:09:38 crc kubenswrapper[5115]: for i in ${!cmds[*]} Jan 20 09:09:38 crc kubenswrapper[5115]: do Jan 20 09:09:38 crc kubenswrapper[5115]: ips=($(eval "${cmds[i]}")) Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: svc_ips["${svc}"]="${ips[@]}" Jan 20 09:09:38 crc kubenswrapper[5115]: break Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Update /etc/hosts only if we get valid service IPs Jan 20 09:09:38 crc kubenswrapper[5115]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 20 09:09:38 crc kubenswrapper[5115]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 20 09:09:38 crc kubenswrapper[5115]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 20 09:09:38 crc kubenswrapper[5115]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait Jan 20 09:09:38 crc kubenswrapper[5115]: continue Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Append resolver entries for services Jan 20 09:09:38 crc kubenswrapper[5115]: rc=0 Jan 20 09:09:38 crc kubenswrapper[5115]: for svc in "${!svc_ips[@]}"; do Jan 20 09:09:38 crc kubenswrapper[5115]: for ip in ${svc_ips[${svc}]}; do Jan 20 09:09:38 crc kubenswrapper[5115]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ $rc -ne 0 ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait Jan 20 09:09:38 crc kubenswrapper[5115]: continue Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 20 09:09:38 crc kubenswrapper[5115]: # Replace /etc/hosts with our modified version if needed Jan 20 09:09:38 crc kubenswrapper[5115]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 20 09:09:38 crc kubenswrapper[5115]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait Jan 20 09:09:38 crc kubenswrapper[5115]: unset svc_ips Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p9bt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bht7q_openshift-dns(650d165f-75fb-4a16-a8fa-d8366b5f6eea): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.622374 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container 
&Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: source "/env/_master" Jan 20 09:09:38 crc kubenswrapper[5115]: set +o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_join_subnet_opt= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_join_subnet_opt= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_transit_switch_subnet_opt= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_transit_switch_subnet_opt= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "" != "" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: dns_name_resolver_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: 
dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # This is needed so that converting clusters from GA to TP Jan 20 09:09:38 crc kubenswrapper[5115]: # will rollout control plane pods as well Jan 20 09:09:38 crc kubenswrapper[5115]: network_segmentation_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_enabled_flag="--enable-multi-network" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" != "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_enabled_flag="--enable-multi-network" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: route_advertisements_enable_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: preconfigured_udn_addresses_enable_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 
20 09:09:38 crc kubenswrapper[5115]: # Enable multi-network policy if configured (control-plane always full mode) Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_policy_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: # Enable admin network policy if configured (control-plane always full mode) Jan 20 09:09:38 crc kubenswrapper[5115]: admin_network_policy_enabled_flag= Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: if [ "shared" == "shared" ]; then Jan 20 09:09:38 crc kubenswrapper[5115]: gateway_mode_flags="--gateway-mode shared" Jan 20 09:09:38 crc kubenswrapper[5115]: elif [ "shared" == "local" ]; then Jan 20 09:09:38 crc kubenswrapper[5115]: gateway_mode_flags="--gateway-mode local" Jan 20 09:09:38 crc kubenswrapper[5115]: else Jan 20 09:09:38 crc kubenswrapper[5115]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 20 09:09:38 crc kubenswrapper[5115]: exit 1 Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: Jan 20 09:09:38 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/ovnkube \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-interconnect \ Jan 20 09:09:38 crc kubenswrapper[5115]: --init-cluster-manager "${K8S_NODE}" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 20 09:09:38 crc kubenswrapper[5115]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --metrics-bind-address "127.0.0.1:29108" \ Jan 20 09:09:38 crc kubenswrapper[5115]: --metrics-enable-pprof \ Jan 20 09:09:38 crc kubenswrapper[5115]: --metrics-enable-config-duration \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v4_join_subnet_opt} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v6_join_subnet_opt} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${dns_name_resolver_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${persistent_ips_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${multi_network_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${network_segmentation_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${gateway_mode_flags} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${route_advertisements_enable_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${preconfigured_udn_addresses_enable_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-ip=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-firewall=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-qos=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-egress-service=true \ 
Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-multicast \ Jan 20 09:09:38 crc kubenswrapper[5115]: --enable-multi-external-gateway=true \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${multi_network_policy_enabled_flag} \ Jan 20 09:09:38 crc kubenswrapper[5115]: ${admin_network_policy_enabled_flag} Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-sfqm7_openshift-ovn-kubernetes(5976ec5f-b09c-4f83-802d-6042842fd8e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.622924 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xjql7" event={"ID":"f41177fd-db48-43c1-9a8d-69cad41d3fab","Type":"ContainerStarted","Data":"659f0ebcc7c90f8ab600f9b5cdedfe62387d5d2f5f114dc5c0d0a72e2046bbb2"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.623168 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.623327 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bht7q" podUID="650d165f-75fb-4a16-a8fa-d8366b5f6eea" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.623717 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"c7548b475343c320509e713d055f6b58242bf38e80dabe0f83bc5f5b246e5948"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.623792 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container 
&Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 20 09:09:38 crc kubenswrapper[5115]: while [ true ]; Jan 20 09:09:38 crc kubenswrapper[5115]: do Jan 20 09:09:38 crc kubenswrapper[5115]: for f in $(ls /tmp/serviceca); do Jan 20 09:09:38 crc kubenswrapper[5115]: echo $f Jan 20 09:09:38 crc kubenswrapper[5115]: ca_file_path="/tmp/serviceca/${f}" Jan 20 09:09:38 crc kubenswrapper[5115]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 20 09:09:38 crc kubenswrapper[5115]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 20 09:09:38 crc kubenswrapper[5115]: if [ -e "${reg_dir_path}" ]; then Jan 20 09:09:38 crc kubenswrapper[5115]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 20 09:09:38 crc kubenswrapper[5115]: else Jan 20 09:09:38 crc kubenswrapper[5115]: mkdir $reg_dir_path Jan 20 09:09:38 crc kubenswrapper[5115]: cp $ca_file_path $reg_dir_path/ca.crt Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: for d in $(ls /etc/docker/certs.d); do Jan 20 09:09:38 crc kubenswrapper[5115]: echo $d Jan 20 09:09:38 crc kubenswrapper[5115]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 20 09:09:38 crc kubenswrapper[5115]: reg_conf_path="/tmp/serviceca/${dp}" Jan 20 09:09:38 crc kubenswrapper[5115]: if [ ! 
-e "${reg_conf_path}" ]; then Jan 20 09:09:38 crc kubenswrapper[5115]: rm -rf /etc/docker/certs.d/$d Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: sleep 60 & wait ${!} Jan 20 09:09:38 crc kubenswrapper[5115]: done Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rwps7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-5tt8v_openshift-image-registry(92f344d4-34bc-4412-83c9-6b7beb45db64): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.623884 5115 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" podUID="5976ec5f-b09c-4f83-802d-6042842fd8e6" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.624875 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-5tt8v" podUID="92f344d4-34bc-4412-83c9-6b7beb45db64" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.624924 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerStarted","Data":"29419067e362c04408ee6901ca499156e52be8d357dd0341693b338a5accc60c"} Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.624941 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 20 09:09:38 crc kubenswrapper[5115]: set -o allexport Jan 20 09:09:38 crc kubenswrapper[5115]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 20 09:09:38 crc kubenswrapper[5115]: source /etc/kubernetes/apiserver-url.env Jan 20 09:09:38 crc kubenswrapper[5115]: else Jan 20 09:09:38 crc kubenswrapper[5115]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 20 09:09:38 crc kubenswrapper[5115]: 
exit 1 Jan 20 09:09:38 crc kubenswrapper[5115]: fi Jan 20 09:09:38 crc kubenswrapper[5115]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 20 09:09:38 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value
:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b
23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.625298 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 20 09:09:38 crc kubenswrapper[5115]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 20 09:09:38 crc kubenswrapper[5115]: 
/entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 20 09:09:38 crc kubenswrapper[5115]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zmmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-xjql7_openshift-multus(f41177fd-db48-43c1-9a8d-69cad41d3fab): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:38 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.626091 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.626464 5115 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-xjql7" podUID="f41177fd-db48-43c1-9a8d-69cad41d3fab" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.626512 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g8mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-zvfcd_openshift-machine-config-operator(dc89765b-3b00-4f86-ae67-a5088c182918): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.629036 5115 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g8mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-zvfcd_openshift-machine-config-operator(dc89765b-3b00-4f86-ae67-a5088c182918): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.630789 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" 
podUID="dc89765b-3b00-4f86-ae67-a5088c182918" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.635769 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.643292 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.660953 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.676519 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.687161 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.696324 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.696373 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.696388 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.696409 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.696424 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.697672 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.706218 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.729270 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.759090 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.759223 5115 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.759445 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:39.759422861 +0000 UTC m=+89.928201391 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.759715 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.760332 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.760394 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.760418 5115 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.760948 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2026-01-20 09:09:39.7608841 +0000 UTC m=+89.929662670 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.761470 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.761888 5115 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.762014 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:39.7619906 +0000 UTC m=+89.930769170 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.772524 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a1
9b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.798412 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.798452 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.798462 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.798476 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.798487 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.813632 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.854688 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.862523 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.862680 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.862771 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.862831 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:09:39.862793401 +0000 UTC m=+90.031571931 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.862981 5115 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.863032 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.863074 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.863101 5115 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.863115 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs podName:3d8f5093-1a2e-4c32-8c74-b6cfb185cc99 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:39.863081929 +0000 UTC m=+90.031860499 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs") pod "network-metrics-daemon-tzrjx" (UID: "3d8f5093-1a2e-4c32-8c74-b6cfb185cc99") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:38 crc kubenswrapper[5115]: E0120 09:09:38.863197 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:39.863171701 +0000 UTC m=+90.031950361 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.893049 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.901487 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:38 crc 
kubenswrapper[5115]: I0120 09:09:38.901557 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.901584 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.901650 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.901678 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:38Z","lastTransitionTime":"2026-01-20T09:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.938162 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:38 crc kubenswrapper[5115]: I0120 09:09:38.975503 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca0
8e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\
"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6
a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.007108 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.007193 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.007226 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.007265 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.007291 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.016568 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.051841 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.096604 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.109995 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.110079 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.110108 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.110142 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.110170 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.130545 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.172128 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.212244 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.213334 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.213453 5115 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.213480 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.213524 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.213551 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.256474 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.298173 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.320128 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.320243 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 
09:09:39.320341 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.320395 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.320420 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.336242 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.372655 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.411260 5115 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.423453 5115 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.423557 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.423582 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.423616 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.423637 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.455840 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde72610
9a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.494968 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.526054 5115 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.526132 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.526153 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.526182 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.526201 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.535046 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.572139 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.626259 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.630913 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.630944 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.630955 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.630968 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.630978 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.654230 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.694304 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.730340 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.733396 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.733460 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.733476 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.733497 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.733512 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.770969 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.773783 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.773937 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.773991 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: 
\"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774200 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774236 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774255 5115 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774335 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:41.774311628 +0000 UTC m=+91.943090198 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774404 5115 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774446 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:41.774434531 +0000 UTC m=+91.943213101 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774516 5115 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.774556 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:41.774545304 +0000 UTC m=+91.943323864 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.821368 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\"
:{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"cont
ainerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.845618 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.845707 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 
09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.845734 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.845761 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.845782 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.854062 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.875198 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875447 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:09:41.875411747 +0000 UTC m=+92.044190287 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.875525 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.875713 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875733 5115 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875825 5115 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs podName:3d8f5093-1a2e-4c32-8c74-b6cfb185cc99 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:41.875800258 +0000 UTC m=+92.044578818 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs") pod "network-metrics-daemon-tzrjx" (UID: "3d8f5093-1a2e-4c32-8c74-b6cfb185cc99") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875873 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875889 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875923 5115 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:39 crc kubenswrapper[5115]: E0120 09:09:39.875972 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:41.875962702 +0000 UTC m=+92.044741242 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.894504 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.949484 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.949553 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.949571 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.949594 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:39 crc kubenswrapper[5115]: I0120 09:09:39.949610 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:39Z","lastTransitionTime":"2026-01-20T09:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.053462 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.053543 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.053565 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.053603 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.053629 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.094445 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.094538 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.094568 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.094599 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.094621 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.110452 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.115369 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.115428 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.115446 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.115470 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.115491 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.130383 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.141799 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.141874 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.141926 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.141981 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.142005 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.159855 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.165086 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.165163 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.165191 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.165224 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.165247 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.181607 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.186056 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.186138 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.186162 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.186192 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.186216 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.202890 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.203242 5115 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.205202 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.205276 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.205304 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.205337 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.205360 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.216368 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.216567 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.216598 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.216950 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.217036 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.217083 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.217346 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99"
Jan 20 09:09:40 crc kubenswrapper[5115]: E0120 09:09:40.219302 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.224565 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.227055 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.230077 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.234868 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.235634 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.243229 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.248267 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.252157 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.253269 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.254444 5115 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.257216 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.262032 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.264657 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.267005 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.269445 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status:
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.269500 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.272228 5115 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.274060 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.275526 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.277072 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.282705 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.283120 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.290332 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.292976 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.295989 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.299408 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.301827 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.304281 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.306818 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.307842 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.307934 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.307960 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.307988 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.308008 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.310260 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.313997 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes"
Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.314173 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.319343 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.326706 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.329743 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.332014 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.334074 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.335348 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.336722 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.338023 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.338866 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.339920 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.340613 5115 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.340719 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.344357 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.345559 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.347052 5115 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.347169 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.348513 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.349223 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.350603 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.351290 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.351801 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.352985 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.354027 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.355281 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.356368 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.357050 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.357730 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.359051 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.360318 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.361624 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.363792 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.364228 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.365231 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.367034 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.373368 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.401210 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.411479 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.411544 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.411563 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.411591 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.411611 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.421321 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.435454 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.449135 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.475512 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.496190 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.514637 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.514700 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.514718 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 
crc kubenswrapper[5115]: I0120 09:09:40.514743 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.514761 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.538124 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a1
9b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.576861 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.612943 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.618074 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 
09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.618135 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.618154 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.618178 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.618196 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.652449 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.721486 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc 
kubenswrapper[5115]: I0120 09:09:40.721569 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.721597 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.721628 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.721653 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.824064 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.824144 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.824171 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.824202 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.824224 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.926609 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.926694 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.926714 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.926737 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:40 crc kubenswrapper[5115]: I0120 09:09:40.926749 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:40Z","lastTransitionTime":"2026-01-20T09:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.030095 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.030168 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.030187 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.030213 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.030233 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.132790 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.132864 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.132931 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.132967 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.132991 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.236381 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.236471 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.236493 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.236522 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.236542 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.339014 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.339086 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.339105 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.339130 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.339148 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.442347 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.442448 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.442477 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.442512 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.442539 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.546111 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.546210 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.546280 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.546308 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.546331 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.648323 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.648417 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.648449 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.648483 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.648505 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.752080 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.752160 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.752187 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.752219 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.752242 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.799434 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.799533 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799629 5115 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799712 5115 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799742 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:45.799715105 +0000 UTC m=+95.968493665 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.799629 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799800 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:45.799775437 +0000 UTC m=+95.968554007 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799857 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799955 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.799985 5115 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.800074 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:45.800047764 +0000 UTC m=+95.968826334 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.855682 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.855776 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.855800 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.855828 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.855845 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.901272 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.901410 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:09:45.90139123 +0000 UTC m=+96.070169760 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.901530 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.901619 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod 
\"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.901759 5115 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.901822 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs podName:3d8f5093-1a2e-4c32-8c74-b6cfb185cc99 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:45.901813031 +0000 UTC m=+96.070591561 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs") pod "network-metrics-daemon-tzrjx" (UID: "3d8f5093-1a2e-4c32-8c74-b6cfb185cc99") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.902185 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.902199 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.902210 5115 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:41 crc kubenswrapper[5115]: E0120 09:09:41.902252 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:45.902243273 +0000 UTC m=+96.071021803 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.958557 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.958612 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.958652 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.958670 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:41 crc kubenswrapper[5115]: I0120 09:09:41.958682 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:41Z","lastTransitionTime":"2026-01-20T09:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.061162 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.061223 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.061248 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.061262 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.061270 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.164732 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.164805 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.164825 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.164849 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.164868 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.222259 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:42 crc kubenswrapper[5115]: E0120 09:09:42.222421 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.222563 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.222574 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:42 crc kubenswrapper[5115]: E0120 09:09:42.222833 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.222611 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:42 crc kubenswrapper[5115]: E0120 09:09:42.223072 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:42 crc kubenswrapper[5115]: E0120 09:09:42.223205 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.254477 5115 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.268179 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.268241 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.268260 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.268289 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.268309 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.371678 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.371749 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.371766 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.371790 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.371807 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.474424 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.474783 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.474853 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.474943 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.475019 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.578429 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.578484 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.578495 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.578514 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.578528 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.681204 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.681286 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.681308 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.681339 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.681366 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.784262 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.784625 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.784730 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.784838 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.784972 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.888123 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.888189 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.888207 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.888232 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.888248 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.991510 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.991602 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.991631 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.991666 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:42 crc kubenswrapper[5115]: I0120 09:09:42.991691 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:42Z","lastTransitionTime":"2026-01-20T09:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.094300 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.094393 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.094415 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.094446 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.094466 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.197540 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.197615 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.197638 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.197664 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.197684 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.301014 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.301423 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.301546 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.301655 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.301772 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.405074 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.405397 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.405489 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.405573 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.405645 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.508542 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.508651 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.508676 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.508716 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.508742 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.611812 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.612315 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.612407 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.612511 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.612595 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.715402 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.715854 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.716032 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.716239 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.716430 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.819746 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.819821 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.819841 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.819868 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.819920 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.922780 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.922862 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.922878 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.922912 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:43 crc kubenswrapper[5115]: I0120 09:09:43.922924 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:43Z","lastTransitionTime":"2026-01-20T09:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.025176 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.025223 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.025232 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.025244 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.025253 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.128522 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.128609 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.128629 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.128656 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.128680 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.216888 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.216937 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.217161 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:44 crc kubenswrapper[5115]: E0120 09:09:44.217943 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:44 crc kubenswrapper[5115]: E0120 09:09:44.218003 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.217227 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:44 crc kubenswrapper[5115]: E0120 09:09:44.218121 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:44 crc kubenswrapper[5115]: E0120 09:09:44.217833 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.232455 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.232580 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.232605 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.232629 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.232684 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.335382 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.335470 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.335497 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.335533 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.335556 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.438666 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.438762 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.438789 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.438819 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.438840 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.542513 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.542928 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.543045 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.543163 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.543257 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.646648 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.646721 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.646746 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.646779 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.646804 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.751470 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.751558 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.751596 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.751630 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.751656 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.855046 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.855150 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.855179 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.855213 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.855239 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.958521 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.958576 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.958588 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.958609 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:44 crc kubenswrapper[5115]: I0120 09:09:44.958623 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:44Z","lastTransitionTime":"2026-01-20T09:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.061409 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.061485 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.061502 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.061530 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.061547 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.163638 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.163685 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.163698 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.163718 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.163731 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.265889 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.265947 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.265959 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.265972 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.265981 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.368519 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.368578 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.368588 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.368603 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.368630 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.471527 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.471594 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.471615 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.471639 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.471659 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.574542 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.574654 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.574673 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.574692 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.574706 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.677469 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.677568 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.677594 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.677626 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.677648 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.780429 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.780560 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.780580 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.780607 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.780624 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.852849 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.853030 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.853127 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853154 5115 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853306 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853348 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:53.853312655 +0000 UTC m=+104.022091225 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853360 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853391 5115 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853519 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:53.853468809 +0000 UTC m=+104.022247379 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853319 5115 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.853634 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:53.853613123 +0000 UTC m=+104.022391703 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.883548 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.883634 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.883654 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.883680 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.883701 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.954789 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.955111 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.955176 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955301 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:09:53.955215146 +0000 UTC m=+104.123993716 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955525 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955593 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955607 5115 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955705 5115 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955715 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:53.955685019 +0000 UTC m=+104.124463549 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 20 09:09:45 crc kubenswrapper[5115]: E0120 09:09:45.955968 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs podName:3d8f5093-1a2e-4c32-8c74-b6cfb185cc99 nodeName:}" failed. No retries permitted until 2026-01-20 09:09:53.955865394 +0000 UTC m=+104.124643964 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs") pod "network-metrics-daemon-tzrjx" (UID: "3d8f5093-1a2e-4c32-8c74-b6cfb185cc99") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.986753 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.986844 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.986856 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.986876 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:45 crc kubenswrapper[5115]: I0120 09:09:45.986907 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:45Z","lastTransitionTime":"2026-01-20T09:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.089729 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.089789 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.089799 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.089820 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.089837 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.192727 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.192782 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.192795 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.192812 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.192824 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.216448 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.216492 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.216522 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:09:46 crc kubenswrapper[5115]: E0120 09:09:46.216645 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 20 09:09:46 crc kubenswrapper[5115]: E0120 09:09:46.216781 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 20 09:09:46 crc kubenswrapper[5115]: E0120 09:09:46.216876 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.217010 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:09:46 crc kubenswrapper[5115]: E0120 09:09:46.217240 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.296028 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.296101 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.296119 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.296145 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.296166 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.399625 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.399689 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.399708 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.399732 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.399750 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.503014 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.503133 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.503153 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.503181 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.503201 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.606888 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.607028 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.607055 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.607088 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.607114 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.710170 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.710259 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.710281 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.710308 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.710328 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.813112 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.813193 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.813219 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.813249 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.813275 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.916626 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.916698 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.916712 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.916735 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:46 crc kubenswrapper[5115]: I0120 09:09:46.916748 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:46Z","lastTransitionTime":"2026-01-20T09:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.019981 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.020970 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.021090 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.021204 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.021295 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.124052 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.124136 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.124157 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.124183 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.124200 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.226463 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.226513 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.226525 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.226540 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.226551 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.328389 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.328449 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.328462 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.328480 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.328492 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.431002 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.431056 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.431071 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.431090 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.431101 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.534199 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.534282 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.534307 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.534340 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.534364 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.636777 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.636863 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.636888 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.636968 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.636993 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.637379 5115 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.740090 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.740138 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.740148 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.740161 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.740171 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.842965 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.843073 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.843102 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.843139 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.843164 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.945676 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.945728 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.945739 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.945758 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:47 crc kubenswrapper[5115]: I0120 09:09:47.945769 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:47Z","lastTransitionTime":"2026-01-20T09:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.048748 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.048820 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.048836 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.048864 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.048933 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.151636 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.151713 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.151738 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.151764 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.151782 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.216116 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.216183 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:09:48 crc kubenswrapper[5115]: E0120 09:09:48.216341 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.216517 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 20 09:09:48 crc kubenswrapper[5115]: E0120 09:09:48.216856 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99"
Jan 20 09:09:48 crc kubenswrapper[5115]: E0120 09:09:48.217111 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.217164 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:09:48 crc kubenswrapper[5115]: E0120 09:09:48.217375 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.254763 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.254840 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.254859 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.254887 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.254938 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.357937 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.358022 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.358077 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.358104 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.358122 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.460748 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.460811 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.460827 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.460846 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.460863 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.564226 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.564305 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.564323 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.564348 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.564369 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.666973 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.667056 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.667075 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.667099 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.667119 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.770096 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.770162 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.770186 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.770216 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.770237 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.873222 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.873297 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.873316 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.873343 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.873381 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.976222 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.976293 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.976319 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.976349 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:48 crc kubenswrapper[5115]: I0120 09:09:48.976372 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:48Z","lastTransitionTime":"2026-01-20T09:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.079786 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.079850 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.079864 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.079886 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.079938 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.182760 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.182862 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.182926 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.182968 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.182993 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.285414 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.285526 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.285556 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.285642 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.285671 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.388447 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.388535 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.388561 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.388594 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.388616 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.491708 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.491792 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.491817 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.491852 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.491875 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.594307 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.594371 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.594394 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.594422 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.594444 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.697100 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.697189 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.697213 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.697242 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.697267 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.800013 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.800553 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.800829 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.801078 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.801250 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.904670 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.905170 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.905398 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.905647 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:49 crc kubenswrapper[5115]: I0120 09:09:49.906026 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:49Z","lastTransitionTime":"2026-01-20T09:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.009031 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.009457 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.009661 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.009848 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.010117 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.113266 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.114003 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.114040 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.114066 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.114084 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.216144 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.216413 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.216474 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.216574 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.216777 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.216987 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.217093 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.217260 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.218845 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.218922 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.218943 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.219015 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.219069 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.220547 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 20 09:09:50 crc kubenswrapper[5115]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash
Jan 20 09:09:50 crc kubenswrapper[5115]: set -euo pipefail
Jan 20 09:09:50 crc kubenswrapper[5115]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key
Jan 20 09:09:50 crc kubenswrapper[5115]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt
Jan 20 09:09:50 crc kubenswrapper[5115]: # As the secret mount is optional we must wait for the files to be present.
Jan 20 09:09:50 crc kubenswrapper[5115]: # The service is created in monitor.yaml and this is created in sdn.yaml.
Jan 20 09:09:50 crc kubenswrapper[5115]: TS=$(date +%s)
Jan 20 09:09:50 crc kubenswrapper[5115]: WARN_TS=$(( ${TS} + $(( 20 * 60)) ))
Jan 20 09:09:50 crc kubenswrapper[5115]: HAS_LOGGED_INFO=0
Jan 20 09:09:50 crc kubenswrapper[5115]:
Jan 20 09:09:50 crc kubenswrapper[5115]: log_missing_certs(){
Jan 20 09:09:50 crc kubenswrapper[5115]: CUR_TS=$(date +%s)
Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then
Jan 20 09:09:50 crc kubenswrapper[5115]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes.
Jan 20 09:09:50 crc kubenswrapper[5115]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then
Jan 20 09:09:50 crc kubenswrapper[5115]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes.
Jan 20 09:09:50 crc kubenswrapper[5115]: HAS_LOGGED_INFO=1
Jan 20 09:09:50 crc kubenswrapper[5115]: fi
Jan 20 09:09:50 crc kubenswrapper[5115]: }
Jan 20 09:09:50 crc kubenswrapper[5115]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do
Jan 20 09:09:50 crc kubenswrapper[5115]: log_missing_certs
Jan 20 09:09:50 crc kubenswrapper[5115]: sleep 5
Jan 20 09:09:50 crc kubenswrapper[5115]: done
Jan 20 09:09:50 crc kubenswrapper[5115]:
Jan 20 09:09:50 crc kubenswrapper[5115]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy
Jan 20 09:09:50 crc kubenswrapper[5115]: exec /usr/bin/kube-rbac-proxy \
Jan 20 09:09:50 crc kubenswrapper[5115]: --logtostderr \
Jan 20 09:09:50 crc kubenswrapper[5115]: --secure-listen-address=:9108 \
Jan 20 09:09:50 crc kubenswrapper[5115]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \
Jan 20 09:09:50 crc kubenswrapper[5115]: --upstream=http://127.0.0.1:29108/ \
Jan 20 09:09:50 crc kubenswrapper[5115]: --tls-private-key-file=${TLS_PK} \
Jan 20 09:09:50 crc kubenswrapper[5115]: --tls-cert-file=${TLS_CERT}
Jan 20 09:09:50 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-sfqm7_openshift-ovn-kubernetes(5976ec5f-b09c-4f83-802d-6042842fd8e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 20 09:09:50 crc kubenswrapper[5115]: > logger="UnhandledError"
Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.224610 5115 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 20 09:09:50 crc kubenswrapper[5115]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ -f "/env/_master" ]]; then
Jan 20 09:09:50 crc kubenswrapper[5115]: set -o allexport
Jan 20 09:09:50 crc kubenswrapper[5115]: source "/env/_master"
Jan 20 09:09:50 crc kubenswrapper[5115]: set +o allexport
Jan 20 09:09:50 crc kubenswrapper[5115]: fi
Jan 20 09:09:50 crc kubenswrapper[5115]:
Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v4_join_subnet_opt=
Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "" != "" ]]; then
Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet "
Jan 20 09:09:50 crc kubenswrapper[5115]: fi
Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v6_join_subnet_opt=
Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "" != "" ]]; then
Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet "
Jan 20 09:09:50 crc kubenswrapper[5115]: fi
Jan 20 09:09:50 crc kubenswrapper[5115]:
Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v4_transit_switch_subnet_opt=
Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "" != "" ]]; then
Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet "
Jan 20 09:09:50 crc kubenswrapper[5115]: fi
Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v6_transit_switch_subnet_opt=
Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "" != "" ]]; then
Jan 20 09:09:50 crc kubenswrapper[5115]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet "
Jan 20 09:09:50 crc kubenswrapper[5115]: fi
Jan 20 09:09:50 crc kubenswrapper[5115]:
Jan 20 09:09:50 crc kubenswrapper[5115]: dns_name_resolver_enabled_flag=
Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then
Jan 20 09:09:50 crc kubenswrapper[5115]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver"
Jan 20 09:09:50 crc kubenswrapper[5115]: fi
Jan 20 09:09:50 crc kubenswrapper[5115]:
Jan 20 09:09:50 crc kubenswrapper[5115]: persistent_ips_enabled_flag="--enable-persistent-ips"
Jan 20 09:09:50 crc kubenswrapper[5115]:
Jan 20 09:09:50 crc kubenswrapper[5115]: # This is needed so that converting clusters from GA to TP
Jan 20 09:09:50 crc kubenswrapper[5115]: # will rollout control plane pods as well
Jan 20 09:09:50 crc kubenswrapper[5115]: network_segmentation_enabled_flag=
Jan 20 09:09:50 crc kubenswrapper[5115]: multi_network_enabled_flag=
Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then
Jan 20 09:09:50 crc kubenswrapper[5115]: multi_network_enabled_flag="--enable-multi-network"
Jan 20 09:09:50 crc kubenswrapper[5115]: fi
Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "true" != "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: multi_network_enabled_flag="--enable-multi-network" Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: route_advertisements_enable_flag= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: preconfigured_udn_addresses_enable_flag= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: # Enable multi-network policy if configured (control-plane always full mode) Jan 20 09:09:50 crc kubenswrapper[5115]: multi_network_policy_enabled_flag= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "false" == "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: # Enable admin network policy if configured (control-plane always full mode) Jan 20 09:09:50 crc kubenswrapper[5115]: admin_network_policy_enabled_flag= Jan 20 09:09:50 crc kubenswrapper[5115]: if [[ "true" == "true" ]]; then Jan 20 09:09:50 crc kubenswrapper[5115]: 
admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: if [ "shared" == "shared" ]; then Jan 20 09:09:50 crc kubenswrapper[5115]: gateway_mode_flags="--gateway-mode shared" Jan 20 09:09:50 crc kubenswrapper[5115]: elif [ "shared" == "local" ]; then Jan 20 09:09:50 crc kubenswrapper[5115]: gateway_mode_flags="--gateway-mode local" Jan 20 09:09:50 crc kubenswrapper[5115]: else Jan 20 09:09:50 crc kubenswrapper[5115]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Jan 20 09:09:50 crc kubenswrapper[5115]: exit 1 Jan 20 09:09:50 crc kubenswrapper[5115]: fi Jan 20 09:09:50 crc kubenswrapper[5115]: Jan 20 09:09:50 crc kubenswrapper[5115]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 20 09:09:50 crc kubenswrapper[5115]: exec /usr/bin/ovnkube \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-interconnect \ Jan 20 09:09:50 crc kubenswrapper[5115]: --init-cluster-manager "${K8S_NODE}" \ Jan 20 09:09:50 crc kubenswrapper[5115]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 20 09:09:50 crc kubenswrapper[5115]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 20 09:09:50 crc kubenswrapper[5115]: --metrics-bind-address "127.0.0.1:29108" \ Jan 20 09:09:50 crc kubenswrapper[5115]: --metrics-enable-pprof \ Jan 20 09:09:50 crc kubenswrapper[5115]: --metrics-enable-config-duration \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${ovn_v4_join_subnet_opt} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${ovn_v6_join_subnet_opt} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${dns_name_resolver_enabled_flag} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${persistent_ips_enabled_flag} \ Jan 20 09:09:50 crc 
kubenswrapper[5115]: ${multi_network_enabled_flag} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${network_segmentation_enabled_flag} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${gateway_mode_flags} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${route_advertisements_enable_flag} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${preconfigured_udn_addresses_enable_flag} \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-egress-ip=true \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-egress-firewall=true \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-egress-qos=true \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-egress-service=true \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-multicast \ Jan 20 09:09:50 crc kubenswrapper[5115]: --enable-multi-external-gateway=true \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${multi_network_policy_enabled_flag} \ Jan 20 09:09:50 crc kubenswrapper[5115]: ${admin_network_policy_enabled_flag} Jan 20 09:09:50 crc kubenswrapper[5115]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-sfqm7_openshift-ovn-kubernetes(5976ec5f-b09c-4f83-802d-6042842fd8e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 20 09:09:50 crc kubenswrapper[5115]: > logger="UnhandledError" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.227282 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" podUID="5976ec5f-b09c-4f83-802d-6042842fd8e6" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.233457 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.250848 5115 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.262098 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.290384 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.306111 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.318194 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.321934 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.321969 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 
09:09:50.322006 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.322020 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.322029 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.334142 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.344674 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.355828 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.356156 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.356310 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 
09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.356480 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.356614 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.372470 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet 
has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03
a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926}
,{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeByte
s\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.373467 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.378563 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.378635 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.378658 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.378686 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.378706 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.394856 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.400308 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.412577 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.412635 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.412654 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.412678 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.412696 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.428586 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.433433 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.447355 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.447958 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.448024 5115 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.448041 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.448064 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.448079 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.469732 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.473837 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.473864 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.473874 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.473905 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 
09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.473916 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.474423 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.483906 5115 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3c68733-f696-46f4-a86e-b22c133b82e3\\\",\\\"systemUUID\\\":\\\"4e7ead0d-ccd6-45dd-b671-f46e59803438\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: E0120 09:09:50.484023 5115 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.485437 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.485468 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.485477 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.485490 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.485499 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.486018 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.499107 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.511332 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.519118 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.522396 5115 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 20 
09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.527822 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.539075 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa47
2d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.589189 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.589286 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.589313 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.589341 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.589369 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.692137 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.692200 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.692220 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.692246 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.692264 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.794675 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.794767 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.794789 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.794818 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.794847 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.897762 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.897843 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.897858 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.897885 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:50 crc kubenswrapper[5115]: I0120 09:09:50.897945 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:50Z","lastTransitionTime":"2026-01-20T09:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.000463 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.000523 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.000535 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.000560 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.000572 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.102963 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.103035 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.103049 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.103072 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.103089 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.206439 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.206501 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.206510 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.206532 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.206545 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.311004 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.311049 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.311061 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.311080 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.311091 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.413428 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.413500 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.413511 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.413532 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.413546 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.515717 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.515771 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.515785 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.515804 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.515817 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.618253 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.618304 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.618317 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.618526 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.618559 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.672366 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bht7q" event={"ID":"650d165f-75fb-4a16-a8fa-d8366b5f6eea","Type":"ContainerStarted","Data":"8a493b21b70e5ca7478414f87f98ad6276550fe379c53f2a7de532436a079af9"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.678268 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5tt8v" event={"ID":"92f344d4-34bc-4412-83c9-6b7beb45db64","Type":"ContainerStarted","Data":"ebbc93aa8ffe71c586af90a1ae797c4ebc8c5f3006d2f2cd16fe20b169f230b5"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.681757 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"b55ff7536dd9e8b83a738f5d6e23ff8882a27e30ba3ce9d545ea86cb80d7e1ba"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.686564 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.698158 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.711005 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.721645 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.722358 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.722423 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.722447 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.722515 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.722536 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.755945 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGro
ups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bd
fa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab
59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682
480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.772001 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.785539 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.798476 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.807326 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a493b21b70e5ca7478414f87f98ad6276550fe379c53f2a7de532436a079af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.820169 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.825201 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.825279 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.825299 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 
crc kubenswrapper[5115]: I0120 09:09:51.825328 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.825346 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.836841 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a1
9b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.848042 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.861218 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.871389 5115 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.883714 5115 status_manager.go:919] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\
\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/c
a-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.893366 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.904259 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.913201 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.928145 5115 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.928230 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.928248 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.928269 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.928288 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:51Z","lastTransitionTime":"2026-01-20T09:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.934430 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.948008 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.959526 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.973021 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPa
th\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi
\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:51 crc kubenswrapper[5115]: I0120 09:09:51.985148 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559
027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\"
:{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342
ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.001058 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.011919 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.031086 5115 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.031145 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.031155 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.031172 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.031181 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.046311 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.058564 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.071208 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.085833 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.098785 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://ebbc93aa8ffe71c586af90a1ae797c4ebc8c5f3006d2f2cd16fe20b169f230b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.129416 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.134617 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.134726 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.134779 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.134810 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.134826 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.146148 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55ff7536dd9e8b83a738f5d6e23ff8882a27e30ba3ce9d545ea86cb80d7e1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.162441 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.175641 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.189269 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a493b21b70e5ca7478414f87f98ad6276550fe379c53f2a7de532436a079af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.205787 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.216685 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.216791 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.216722 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:52 crc kubenswrapper[5115]: E0120 09:09:52.216972 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.217107 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:52 crc kubenswrapper[5115]: E0120 09:09:52.217360 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:52 crc kubenswrapper[5115]: E0120 09:09:52.218010 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:52 crc kubenswrapper[5115]: E0120 09:09:52.218360 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.223585 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a1
9b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.238674 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.238787 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.238810 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.238841 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.238860 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.243507 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.342662 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.342736 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.342754 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.342780 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.342802 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.446539 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.446604 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.446628 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.446659 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.446682 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.550003 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.550057 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.550068 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.550085 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.550094 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.653398 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.653462 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.653476 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.653503 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.653520 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.687223 5115 generic.go:358] "Generic (PLEG): container finished" podID="0b51ef97-33e0-4889-bd54-ac4be09c39e7" containerID="7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286" exitCode=0 Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.687344 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerDied","Data":"7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.689882 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerStarted","Data":"0cb99b9960631ec0d3f80adf4b325d73a90bdebbe453648f57cffc26e11a89e8"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.689960 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerStarted","Data":"95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.714365 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.726124 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55ff7536dd9e8b83a738f5d6e23ff8882a27e30ba3ce9d545ea86cb80d7e1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.738759 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.753751 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.756235 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.756286 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.756297 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.756314 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.756325 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.767271 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a493b21b70e5ca7478414f87f98ad6276550fe379c53f2a7de532436a079af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\
"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.779798 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.793766 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.805633 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.816235 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.826592 5115 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.840630 5115 status_manager.go:919] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\
\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/c
a-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.851574 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.858489 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.858548 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.858561 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.858581 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.858595 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.864274 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.872685 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.893341 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.906354 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.920773 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.932632 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.941848 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://ebbc93aa8ffe71c586af90a1ae797c4ebc8c5f3006d2f2cd16fe20b169f230b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.950877 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.961130 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:52 crc 
kubenswrapper[5115]: I0120 09:09:52.961202 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.961213 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.961232 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.961244 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:52Z","lastTransitionTime":"2026-01-20T09:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.963986 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.975574 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca0
8e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\
"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6
a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.987179 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:52 crc kubenswrapper[5115]: I0120 09:09:52.998917 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.025961 5115 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.040695 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.054970 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.064554 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.064607 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.064618 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.064638 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.064652 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.065815 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.097387 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"co
ntainerID\\\":\\\"cri-o://ebbc93aa8ffe71c586af90a1ae797c4ebc8c5f3006d2f2cd16fe20b169f230b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.130843 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.145290 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55ff7536dd9e8b83a738f5d6e23ff8882a27e30ba3ce9d545ea86cb80d7e1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.159415 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.166910 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.166959 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.166970 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.166986 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.166998 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.170436 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0cb99b9960631ec0d3f80adf4b325d73a90bdebbe453648f57cffc26e11a89e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\
"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.179024 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a493b21b70e5ca7478414f87f98ad6276550fe379c53f2a7de532436a079af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.192924 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.207250 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.216918 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.217175 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.218274 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.232929 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.270854 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 
09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.270942 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.270958 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.270981 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.270997 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.373977 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.374016 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.374025 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.374041 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.374050 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.476818 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.476880 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.476929 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.476952 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.476964 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.579499 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.579564 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.579579 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.579604 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.579620 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.682764 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.683193 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.683213 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.683234 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.683247 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.694222 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xjql7" event={"ID":"f41177fd-db48-43c1-9a8d-69cad41d3fab","Type":"ContainerStarted","Data":"a865a33344a91fb61ba891497bd1d13a6849531c298102a1405e220a44d2933e"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.696133 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"524bafe9b9fb2826c32ba260baaac1dd3bdacd715c281152d61af32d4919eba0"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.696226 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"4fe1f9bd2203e20099f4a6f3c4a22df44a05e962178d45b9a0fa66ab33395af9"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.710600 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.723922 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xjql7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f41177fd-db48-43c1-9a8d-69cad41d3fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://a865a33344a91fb61ba891497bd1d13a6849531c298102a1405e220a44d2933e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zmmw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xjql7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc 
kubenswrapper[5115]: I0120 09:09:53.733815 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5tt8v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f344d4-34bc-4412-83c9-6b7beb45db64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://ebbc93aa8ffe71c586af90a1ae797c4ebc8c5f3006d2f2cd16fe20b169f230b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwps7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5tt8v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.756334 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69226b59-0946-40c7-a9a3-38368638de30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://3438785036ee5cce0cfb7ef5015765de9e91020a660f22067f83fe7088f6983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7cf2bf860f3578cf077c66e64feccdb0f4aa9b087c452b75e9089435dbe938ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f147340eaa8ad9365db74bb82cf821ebd6579e31407e87af1956220ccf9907a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c3c4ab2513a300c9031279fe7c4f932126d69745f336cee3a8adcd6cd8bd0cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://92465a413675efac7faed27b64279954bdfa6292127a177c3bff862358a9a025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f10288ab59089753121af19b0f7ad453fc7b7a50d66e5d429b1f5e46962dbc35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7eb1250e743174b8367f49423c7ae31f24a26c0ce04d9cf83c7747f12e1363c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://e0f6ae1955cc25375e7a0bdbfb67efdcbb6a702b9faf54d40a444392f0e7c151\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.767833 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b55ff7536dd9e8b83a738f5d6e23ff8882a27e30ba3ce9d545ea86cb80d7e1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.779169 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.797551 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc89765b-3b00-4f86-ae67-a5088c182918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0cb99b9960631ec0d3f80adf4b325d73a90bdebbe453648f57cffc26e11a89e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7g8mg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zvfcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.799263 5115 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.799310 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.799323 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.799341 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.799355 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.809919 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bht7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"650d165f-75fb-4a16-a8fa-d8366b5f6eea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://8a493b21b70e5ca7478414f87f98ad6276550fe379c53f2a7de532436a079af9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:09:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p9bt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bht7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.826027 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b42cc5a-50db-4588-8149-e758f33704ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h55j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bmvv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.840873 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5125ab95-d5cf-48ad-a899-3add343eaeba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T09:09:22Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0120 09:09:21.702814 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0120 09:09:21.703031 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0120 09:09:21.704002 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4007456761/tls.crt::/tmp/serving-cert-4007456761/tls.key\\\\\\\"\\\\nI0120 09:09:22.179437 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 09:09:22.181269 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 09:09:22.181287 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 09:09:22.181316 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 09:09:22.181321 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 09:09:22.184781 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 09:09:22.184834 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 09:09:22.184845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 09:09:22.184848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 09:09:22.184851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 09:09:22.184854 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0120 09:09:22.185244 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0120 09:09:22.186562 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.852695 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.863423 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5976ec5f-b09c-4f83-802d-6042842fd8e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tt9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-sfqm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.872694 5115 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25383c7b-b61c-48bd-b099-c7c8f90c6c1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f93bd1c4ac75f0c99554549eefe09dda170f1b0afebc9787b7fd0a0494295d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4e50c289ad8aa409f7e54320df8a017c15f3b8c608266e408fe79c4155cbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.888638 5115 status_manager.go:919] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c36dad2-2b5f-476d-ae16-db72a8a479e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9cba2d9418782f2aa23b490fca45506e8a44b0f733ce30c248299532a7c06d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2d7f893e43011292fd2dc960e3f3f89c2af1830eace24fdafba43340a362e1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\
\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6c00207af01190039121d0127e5a029446b01758e672d57fe7d8c31b546a00d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/c
a-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.900477 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74386c11-427f-467a-bfa5-799093f908c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62aeee29713cf7b320e1bbf81544cbd80fb6575f67080fb534f54cbf1267a767\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://568bfe79c3828aa5c26a80f41e7507eaa2342c0c17fb8d4b2e330a163c96af56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://81b0daa998eef062af8f4d4bb257256cfa372aed58e0bbba4e167bbfa574acd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T09:08:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70cfe00add4342ecc6915b829d12c1c4a9476005e6a1442944be45056180b3f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:08:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:08:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.902687 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.902772 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.902792 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.902814 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.902859 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:53Z","lastTransitionTime":"2026-01-20T09:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.916042 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.924921 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtcxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tzrjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.947735 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b51ef97-33e0-4889-bd54-ac4be09c39e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a7ed1933ad1c3e8e4846138b7c25f0e01b03dbae5680684a35133c923073286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T09:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T09:09:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9kn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T09:09:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pnd9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.952920 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.952960 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.952988 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953067 5115 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953113 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:10:09.953102046 +0000 UTC m=+120.121880576 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953194 5115 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953352 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-20 09:10:09.953314672 +0000 UTC m=+120.122093192 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953367 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953402 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953417 5115 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:53 crc kubenswrapper[5115]: E0120 09:09:53.953459 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-20 09:10:09.953449555 +0000 UTC m=+120.122228565 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:53 crc kubenswrapper[5115]: I0120 09:09:53.960616 5115 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T09:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.005608 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.005655 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.005666 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.005683 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.005697 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.053843 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054118 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.054077802 +0000 UTC m=+120.222856342 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.054236 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.054341 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod 
\"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054501 5115 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054578 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs podName:3d8f5093-1a2e-4c32-8c74-b6cfb185cc99 nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.054560276 +0000 UTC m=+120.223338806 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs") pod "network-metrics-daemon-tzrjx" (UID: "3d8f5093-1a2e-4c32-8c74-b6cfb185cc99") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054499 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054632 5115 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054646 5115 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.054720 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.054701029 +0000 UTC m=+120.223479559 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.108623 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.108690 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.108702 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.108719 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.108731 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.211806 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.212369 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.212389 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.212415 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.212430 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.216256 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.216288 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.216393 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.216404 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.216496 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.216710 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.217598 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:54 crc kubenswrapper[5115]: E0120 09:09:54.217747 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.314384 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.314438 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.314449 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.314463 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.314473 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.428197 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.428256 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.428272 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.428292 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.428305 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.531130 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.531196 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.531208 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.531224 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.531256 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.633481 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.633565 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.633577 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.633622 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.633631 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.702543 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"42de74bd48899fc57520fc4e45923690712aec29576a30790a2275dad3b7e5f9"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.702626 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"4086c3ea2d85e4b296e8536fac149813e0d785aca75891f55621eeb44af23813"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.704265 5115 generic.go:358] "Generic (PLEG): container finished" podID="4b42cc5a-50db-4588-8149-e758f33704ef" containerID="1c01d6e379df67f685800890a1c7d12280aee6039416a2bf9a5ef2225e972142" exitCode=0 Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.704378 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerDied","Data":"1c01d6e379df67f685800890a1c7d12280aee6039416a2bf9a5ef2225e972142"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.710276 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"e27e5e9fbb542a35e148c108a51be897d3bad20213ec443e846c659fd47daab6"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.710326 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"3e2ae5d7fdc947424efda094b0ac4baf576f59e1e70b2b229386f40b16262dbb"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.710344 5115 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"78337bb7c60f2c9302a636e3343c0c887f813ab04815aef94f1ce3af7d9061d2"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.710362 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"a30d3012d5497a1a5c437ee4a4e23ed164c589507f56546cf0ae81558d2146cb"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.742155 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.742215 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.742233 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.742257 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.742274 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.796715 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.796678613 podStartE2EDuration="17.796678613s" podCreationTimestamp="2026-01-20 09:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:54.773401029 +0000 UTC m=+104.942179569" watchObservedRunningTime="2026-01-20 09:09:54.796678613 +0000 UTC m=+104.965457193" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.818071 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=17.818043985 podStartE2EDuration="17.818043985s" podCreationTimestamp="2026-01-20 09:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:54.796600331 +0000 UTC m=+104.965378891" watchObservedRunningTime="2026-01-20 09:09:54.818043985 +0000 UTC m=+104.986822525" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.818252 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=17.818246041 podStartE2EDuration="17.818246041s" podCreationTimestamp="2026-01-20 09:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:54.818193889 +0000 UTC m=+104.986972469" watchObservedRunningTime="2026-01-20 09:09:54.818246041 +0000 UTC m=+104.987024581" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.846336 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc 
kubenswrapper[5115]: I0120 09:09:54.846393 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.846407 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.846426 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.846439 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.924202 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xjql7" podStartSLOduration=83.92417931 podStartE2EDuration="1m23.92417931s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:54.923829031 +0000 UTC m=+105.092607611" watchObservedRunningTime="2026-01-20 09:09:54.92417931 +0000 UTC m=+105.092957830" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.949216 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.949265 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.949275 5115 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.949288 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.949299 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:54Z","lastTransitionTime":"2026-01-20T09:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.969237 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5tt8v" podStartSLOduration=83.969221887 podStartE2EDuration="1m23.969221887s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:54.938577886 +0000 UTC m=+105.107356406" watchObservedRunningTime="2026-01-20 09:09:54.969221887 +0000 UTC m=+105.138000407" Jan 20 09:09:54 crc kubenswrapper[5115]: I0120 09:09:54.969437 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.969432542 podStartE2EDuration="16.969432542s" podCreationTimestamp="2026-01-20 09:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:54.968860147 +0000 UTC m=+105.137638667" watchObservedRunningTime="2026-01-20 09:09:54.969432542 +0000 UTC m=+105.138211062" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.015140 5115 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podStartSLOduration=84.014927702 podStartE2EDuration="1m24.014927702s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:55.014744786 +0000 UTC m=+105.183523316" watchObservedRunningTime="2026-01-20 09:09:55.014927702 +0000 UTC m=+105.183706272" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.028327 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bht7q" podStartSLOduration=84.02830191 podStartE2EDuration="1m24.02830191s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:55.027047946 +0000 UTC m=+105.195826506" watchObservedRunningTime="2026-01-20 09:09:55.02830191 +0000 UTC m=+105.197080480" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.053768 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.054257 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.054350 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.054443 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.054525 5115 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.156258 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.156555 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.156679 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.156760 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.156826 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.259776 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.260284 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.260304 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.260334 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.260354 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.363431 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.363536 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.363565 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.363600 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.363626 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.466073 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.466143 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.466162 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.466187 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.466207 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.569294 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.569368 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.569412 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.569447 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.569472 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.671820 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.671929 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.671957 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.671990 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.672013 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.718236 5115 generic.go:358] "Generic (PLEG): container finished" podID="4b42cc5a-50db-4588-8149-e758f33704ef" containerID="93e19dc8e1e75dbba4d59a1fe5d94c21410eba7cde11cc778bff185c983d2dde" exitCode=0 Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.718356 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerDied","Data":"93e19dc8e1e75dbba4d59a1fe5d94c21410eba7cde11cc778bff185c983d2dde"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.777230 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.777291 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.777312 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.777332 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.777346 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.880397 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.880458 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.880471 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.880492 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.880508 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.983924 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.983972 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.984000 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.984023 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:55 crc kubenswrapper[5115]: I0120 09:09:55.984035 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:55Z","lastTransitionTime":"2026-01-20T09:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.086512 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.086568 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.086587 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.086610 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.086629 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.189466 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.189575 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.189632 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.189661 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.189681 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.216683 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.216683 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:56 crc kubenswrapper[5115]: E0120 09:09:56.216821 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:56 crc kubenswrapper[5115]: E0120 09:09:56.217000 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.217047 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:56 crc kubenswrapper[5115]: E0120 09:09:56.217434 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.217063 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:56 crc kubenswrapper[5115]: E0120 09:09:56.217745 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.292238 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.292294 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.292313 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.292336 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.292354 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.395550 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.395596 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.395616 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.395642 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.395659 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.498407 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.498475 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.498493 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.498517 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.498535 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.600986 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.601062 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.601085 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.601113 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.601133 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.703707 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.703788 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.703809 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.703833 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.703852 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.724321 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"a93ec71993e4c56239bbf76149ff10cda2f8e68e538501dda45a8338b48de997"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.727801 5115 generic.go:358] "Generic (PLEG): container finished" podID="4b42cc5a-50db-4588-8149-e758f33704ef" containerID="23638954be69df0286611e0aaf546639ef982b5aeb0d53cb2de8d34c8a7ed899" exitCode=0 Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.727881 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerDied","Data":"23638954be69df0286611e0aaf546639ef982b5aeb0d53cb2de8d34c8a7ed899"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.737769 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"c7cecfc3dfcd46299a42d88a01cb68349ccf193c1d236e51c02d572d961be382"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.811114 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.811173 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.811198 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.811229 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 
09:09:56.811252 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.914598 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.914752 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.914777 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.914802 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:56 crc kubenswrapper[5115]: I0120 09:09:56.914851 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:56Z","lastTransitionTime":"2026-01-20T09:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.017628 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.017719 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.017741 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.017770 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.017788 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.120650 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.120700 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.120711 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.120726 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.120737 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.224153 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.224223 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.224241 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.224270 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.224293 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.326242 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.326307 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.326329 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.326354 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.326376 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.428685 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.428757 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.428776 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.428805 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.428829 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.531790 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.531880 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.531931 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.531959 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.531976 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.635015 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.635107 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.635130 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.635161 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.635185 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.738283 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.738367 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.738394 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.738427 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.738455 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.745683 5115 generic.go:358] "Generic (PLEG): container finished" podID="4b42cc5a-50db-4588-8149-e758f33704ef" containerID="1a13633dc4f230ffbd25769764224ca0d8e8fb1608692912319b0741bae6f275" exitCode=0 Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.745811 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerDied","Data":"1a13633dc4f230ffbd25769764224ca0d8e8fb1608692912319b0741bae6f275"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.841799 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.841876 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.841929 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.841957 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.841976 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.944336 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.944400 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.944455 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.944496 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:57 crc kubenswrapper[5115]: I0120 09:09:57.944513 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:57Z","lastTransitionTime":"2026-01-20T09:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.047485 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.047552 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.047570 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.047589 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.047604 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.150614 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.150704 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.150717 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.150736 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.150753 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.216546 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.216550 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:09:58 crc kubenswrapper[5115]: E0120 09:09:58.216729 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.216760 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.216807 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:09:58 crc kubenswrapper[5115]: E0120 09:09:58.216956 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:09:58 crc kubenswrapper[5115]: E0120 09:09:58.217060 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:09:58 crc kubenswrapper[5115]: E0120 09:09:58.217126 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.253212 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.253250 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.253259 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.253273 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.253283 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.355483 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.355558 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.355578 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.355605 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.355623 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.458231 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.458306 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.458331 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.458360 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.458383 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.561093 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.561157 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.561177 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.561204 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.561222 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.664757 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.665314 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.665341 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.665372 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.665395 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.753101 5115 generic.go:358] "Generic (PLEG): container finished" podID="4b42cc5a-50db-4588-8149-e758f33704ef" containerID="7146d1371e225c859e189baf0ecb8196a4c61a5eb99820fa325e1ffbd66a1630" exitCode=0 Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.753190 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerDied","Data":"7146d1371e225c859e189baf0ecb8196a4c61a5eb99820fa325e1ffbd66a1630"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.761954 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" event={"ID":"0b51ef97-33e0-4889-bd54-ac4be09c39e7","Type":"ContainerStarted","Data":"f6b7022d24953d48ed1163c056d62ecaac06c48fcd940ff10ada258fd284089a"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.767657 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.767731 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.767751 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.767781 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.767807 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.871294 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.871359 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.871379 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.871403 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.871420 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.975164 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.975290 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.975310 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.975336 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:58 crc kubenswrapper[5115]: I0120 09:09:58.975354 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:58Z","lastTransitionTime":"2026-01-20T09:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.070278 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.070311 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.078470 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.078544 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.078566 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.078595 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.078614 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.122485 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.180739 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.180794 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.180805 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.180823 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.180836 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.197772 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" podStartSLOduration=88.197750514 podStartE2EDuration="1m28.197750514s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:09:59.144017824 +0000 UTC m=+109.312796354" watchObservedRunningTime="2026-01-20 09:09:59.197750514 +0000 UTC m=+109.366529044" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.283360 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.283441 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.283467 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.283499 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.283522 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.386540 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.386615 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.386630 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.386676 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.386692 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.489190 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.489258 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.489276 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.489300 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.489319 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.592025 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.592198 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.592219 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.592245 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.592992 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.696042 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.696105 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.696116 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.696131 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.696141 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.783737 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerStarted","Data":"a098e56dbbea5ca4409d69f99b7da39ee28e4043e7bc403eb6e1447175c69045"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.784688 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.798175 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.798220 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.798231 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.798246 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.798261 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.824127 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.900953 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.901334 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.901343 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.901357 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:09:59 crc kubenswrapper[5115]: I0120 09:09:59.901367 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:09:59Z","lastTransitionTime":"2026-01-20T09:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.004086 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.004134 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.004144 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.004158 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.004168 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.106753 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.106809 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.106822 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.106843 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.106855 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.209338 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.209395 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.209413 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.209440 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.209461 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.220782 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.220966 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.221013 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:10:00 crc kubenswrapper[5115]: E0120 09:10:00.221074 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:10:00 crc kubenswrapper[5115]: E0120 09:10:00.221116 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.221164 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:10:00 crc kubenswrapper[5115]: E0120 09:10:00.221257 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:10:00 crc kubenswrapper[5115]: E0120 09:10:00.221241 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.311693 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.311732 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.311743 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.311757 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.311766 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.414202 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.414269 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.414287 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.414312 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.414330 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.517612 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.517755 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.517780 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.517813 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.517831 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.545728 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.545811 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.545837 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.545864 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.545882 5115 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T09:10:00Z","lastTransitionTime":"2026-01-20T09:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.614217 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g"] Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.618293 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.622037 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.622245 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.622133 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.622770 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.744990 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16b38190-bc9a-4748-b5b6-58629c825842-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.745131 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/16b38190-bc9a-4748-b5b6-58629c825842-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.745246 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/16b38190-bc9a-4748-b5b6-58629c825842-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.745312 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16b38190-bc9a-4748-b5b6-58629c825842-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.745426 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16b38190-bc9a-4748-b5b6-58629c825842-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.792510 5115 generic.go:358] "Generic (PLEG): container finished" podID="4b42cc5a-50db-4588-8149-e758f33704ef" containerID="a098e56dbbea5ca4409d69f99b7da39ee28e4043e7bc403eb6e1447175c69045" exitCode=0 Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.792619 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerDied","Data":"a098e56dbbea5ca4409d69f99b7da39ee28e4043e7bc403eb6e1447175c69045"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.792720 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" 
event={"ID":"4b42cc5a-50db-4588-8149-e758f33704ef","Type":"ContainerStarted","Data":"0d144c564cfd9c4e5c2b3e6a6e8aec9fd0bf91968d03c1108dace4a16ebb1542"} Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847445 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16b38190-bc9a-4748-b5b6-58629c825842-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847613 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16b38190-bc9a-4748-b5b6-58629c825842-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847638 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16b38190-bc9a-4748-b5b6-58629c825842-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847782 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16b38190-bc9a-4748-b5b6-58629c825842-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847795 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16b38190-bc9a-4748-b5b6-58629c825842-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847868 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/16b38190-bc9a-4748-b5b6-58629c825842-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.847984 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16b38190-bc9a-4748-b5b6-58629c825842-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.849600 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/16b38190-bc9a-4748-b5b6-58629c825842-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.867553 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16b38190-bc9a-4748-b5b6-58629c825842-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 
09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.878120 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16b38190-bc9a-4748-b5b6-58629c825842-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-jpc2g\" (UID: \"16b38190-bc9a-4748-b5b6-58629c825842\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: I0120 09:10:00.939015 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" Jan 20 09:10:00 crc kubenswrapper[5115]: W0120 09:10:00.964085 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16b38190_bc9a_4748_b5b6_58629c825842.slice/crio-60d86305984877e67216bab82d059734cd9d1c8e2f26d5361b49baae41a05a8a WatchSource:0}: Error finding container 60d86305984877e67216bab82d059734cd9d1c8e2f26d5361b49baae41a05a8a: Status 404 returned error can't find the container with id 60d86305984877e67216bab82d059734cd9d1c8e2f26d5361b49baae41a05a8a Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.206625 5115 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.219297 5115 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.797189 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" event={"ID":"5976ec5f-b09c-4f83-802d-6042842fd8e6","Type":"ContainerStarted","Data":"25556cd52edb7e5bee63322ae43421b7d2f5eb1221d6ec086899b092f9060931"} Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.797533 5115 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" event={"ID":"5976ec5f-b09c-4f83-802d-6042842fd8e6","Type":"ContainerStarted","Data":"0c87f2a3b1054bd63ba4b0c7f603ff4c686d5a70069129f3faeb23682d7b2e1e"} Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.799414 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" event={"ID":"16b38190-bc9a-4748-b5b6-58629c825842","Type":"ContainerStarted","Data":"8ac0cec58a9ec028f90b173038747a513529d2e879c00c45c86f856848377713"} Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.799488 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" event={"ID":"16b38190-bc9a-4748-b5b6-58629c825842","Type":"ContainerStarted","Data":"60d86305984877e67216bab82d059734cd9d1c8e2f26d5361b49baae41a05a8a"} Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.825807 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bmvv2" podStartSLOduration=90.825790671 podStartE2EDuration="1m30.825790671s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:00.821452467 +0000 UTC m=+110.990231037" watchObservedRunningTime="2026-01-20 09:10:01.825790671 +0000 UTC m=+111.994569201" Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.826074 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-sfqm7" podStartSLOduration=90.826068629 podStartE2EDuration="1m30.826068629s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 
09:10:01.823956683 +0000 UTC m=+111.992735213" watchObservedRunningTime="2026-01-20 09:10:01.826068629 +0000 UTC m=+111.994847159" Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.827780 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tzrjx"] Jan 20 09:10:01 crc kubenswrapper[5115]: I0120 09:10:01.827992 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:10:01 crc kubenswrapper[5115]: E0120 09:10:01.828101 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:10:02 crc kubenswrapper[5115]: I0120 09:10:02.216187 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:10:02 crc kubenswrapper[5115]: E0120 09:10:02.216315 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:10:02 crc kubenswrapper[5115]: I0120 09:10:02.216328 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:10:02 crc kubenswrapper[5115]: I0120 09:10:02.216367 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:10:02 crc kubenswrapper[5115]: E0120 09:10:02.216434 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:10:02 crc kubenswrapper[5115]: E0120 09:10:02.216674 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:10:03 crc kubenswrapper[5115]: I0120 09:10:03.216732 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:10:03 crc kubenswrapper[5115]: E0120 09:10:03.216878 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:10:04 crc kubenswrapper[5115]: I0120 09:10:04.216577 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:10:04 crc kubenswrapper[5115]: I0120 09:10:04.216649 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:10:04 crc kubenswrapper[5115]: E0120 09:10:04.216746 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 20 09:10:04 crc kubenswrapper[5115]: I0120 09:10:04.216798 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:10:04 crc kubenswrapper[5115]: E0120 09:10:04.217156 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 20 09:10:04 crc kubenswrapper[5115]: E0120 09:10:04.218353 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 20 09:10:04 crc kubenswrapper[5115]: I0120 09:10:04.218777 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.217043 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:10:05 crc kubenswrapper[5115]: E0120 09:10:05.217669 5115 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tzrjx" podUID="3d8f5093-1a2e-4c32-8c74-b6cfb185cc99" Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.833942 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.836631 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"cd35bfe818999fb69f754d3ef537d63114d8766c9a55fd8c1f055b4598993e53"} Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.837412 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.866431 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-jpc2g" podStartSLOduration=94.866410634 podStartE2EDuration="1m34.866410634s" 
podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:01.846424425 +0000 UTC m=+112.015202995" watchObservedRunningTime="2026-01-20 09:10:05.866410634 +0000 UTC m=+116.035189184" Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.866929 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=27.866922567 podStartE2EDuration="27.866922567s" podCreationTimestamp="2026-01-20 09:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:05.86624821 +0000 UTC m=+116.035026780" watchObservedRunningTime="2026-01-20 09:10:05.866922567 +0000 UTC m=+116.035701107" Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.875644 5115 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.875888 5115 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Jan 20 09:10:05 crc kubenswrapper[5115]: I0120 09:10:05.917373 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-2vzsk"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.106846 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xn6qp"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.107051 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.109969 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.110099 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.112532 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.112603 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.113728 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.113908 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.113931 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.113949 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.114009 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.114159 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.114609 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.114837 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.116262 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.118439 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-s5mfg"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.119138 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.120714 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.122986 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.123222 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.123622 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c88bx"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.125447 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.126860 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.127368 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.127676 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.127746 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.128554 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-78z8z"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.128805 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.128865 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.131248 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.131679 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.131697 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.131748 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.131870 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132125 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132171 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132320 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132410 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132613 5115 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132654 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132771 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.132965 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.133136 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.133270 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.133414 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.133668 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134100 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134367 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134555 5115 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134612 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134661 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134763 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.134983 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.140397 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-glkw9"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.140821 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.144610 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.147108 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.147724 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.150109 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.151576 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.162083 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.162664 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.162768 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.163260 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.163694 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.163850 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.164049 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: 
I0120 09:10:06.164457 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.164594 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.164597 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.164953 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.166376 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.166748 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.166169 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.170480 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-ljj2s"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.170778 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.172364 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.173095 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.173336 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.175039 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.178818 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.179175 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.179677 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.179880 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.180114 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.180424 5115 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.181135 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.181501 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-ljj2s" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.184387 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.184609 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.184926 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185086 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185274 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185393 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185666 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185710 5115 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185845 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185875 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.185982 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.186000 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.186113 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.186147 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.186243 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.187007 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.187576 5115 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.190022 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.194420 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-b674j"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.194767 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.200042 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.201003 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.206850 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.207598 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.208241 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.209196 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 20 09:10:06 crc 
kubenswrapper[5115]: I0120 09:10:06.216226 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.217301 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.217544 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220026 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220134 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220597 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-policies\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220660 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-etcd-client\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220703 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qph7v\" 
(UniqueName: \"kubernetes.io/projected/72f63421-cfe9-45f8-85fe-b779a81a7ebb-kube-api-access-qph7v\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220736 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.220995 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6f108d0-ed4b-4318-bd96-7de2824bf73e-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221068 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-config\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221116 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/72f63421-cfe9-45f8-85fe-b779a81a7ebb-audit-dir\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.221160 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5bxk\" (UniqueName: \"kubernetes.io/projected/603cfb78-063c-444d-8434-38e8ff6b5f70-kube-api-access-d5bxk\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221238 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221280 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6f108d0-ed4b-4318-bd96-7de2824bf73e-images\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221315 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221348 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-config\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221382 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-encryption-config\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221429 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnsw2\" (UniqueName: \"kubernetes.io/projected/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-kube-api-access-cnsw2\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221492 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221577 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-serving-cert\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221654 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pzbj\" (UniqueName: \"kubernetes.io/projected/73f78db9-bab5-49ee-84a4-9f0825efca8a-kube-api-access-2pzbj\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221691 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-serving-cert\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.221752 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-etcd-client\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222123 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0386fc07-a367-4188-8fab-3ce5d14ad6f2-audit-dir\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222160 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-console-config\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222184 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222324 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222360 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/72f63421-cfe9-45f8-85fe-b779a81a7ebb-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222532 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9aa837bd-63fc-4bb8-b158-d8632117a117-console-oauth-config\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc 
kubenswrapper[5115]: I0120 09:10:06.222573 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-config\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222624 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222660 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222774 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g768j\" (UniqueName: \"kubernetes.io/projected/3b28944b-12d3-4087-b906-99fbf2937724-kube-api-access-g768j\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222808 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rrc9m\" (UniqueName: \"kubernetes.io/projected/c6f108d0-ed4b-4318-bd96-7de2824bf73e-kube-api-access-rrc9m\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222842 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4mq5\" (UniqueName: \"kubernetes.io/projected/9aa837bd-63fc-4bb8-b158-d8632117a117-kube-api-access-k4mq5\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222873 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3b28944b-12d3-4087-b906-99fbf2937724-available-featuregates\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222941 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.223520 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.224283 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.224604 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.222972 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228377 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228421 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-trusted-ca-bundle\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228461 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b28944b-12d3-4087-b906-99fbf2937724-serving-cert\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: 
\"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228521 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-dir\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228574 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa837bd-63fc-4bb8-b158-d8632117a117-console-serving-cert\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228596 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-oauth-serving-cert\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228634 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228661 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksglj\" (UniqueName: 
\"kubernetes.io/projected/0386fc07-a367-4188-8fab-3ce5d14ad6f2-kube-api-access-ksglj\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228693 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-encryption-config\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228719 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f108d0-ed4b-4318-bd96-7de2824bf73e-config\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228747 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-audit-policies\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228770 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.228791 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-image-import-ca\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228806 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-etcd-serving-ca\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228831 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228848 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228864 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603cfb78-063c-444d-8434-38e8ff6b5f70-serving-cert\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: 
\"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228887 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-audit\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228942 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-trusted-ca-bundle\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228972 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.228992 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-service-ca\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.230070 5115 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.230206 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.230243 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.233649 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.234535 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.238458 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.243967 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.244357 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.244547 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.245057 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.245439 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.247038 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.249264 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.249493 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.249925 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.250321 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.251839 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.252149 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.254500 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.254634 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.257268 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.257476 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.260106 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.260209 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.265097 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.265239 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.268032 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-8622t"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.268170 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.270845 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-n9hxc"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.270975 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.276360 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.276825 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.279728 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.280006 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.285403 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.291104 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-2vzsk"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.291137 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-ztcgs"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.291617 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.292006 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.292385 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.296753 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.296958 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.299768 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.299927 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.302684 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.302757 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.307657 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.307838 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.310716 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.310823 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.311086 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.313617 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.313756 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.316557 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.316624 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.319446 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.319469 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-l96rs"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.319626 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.321934 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-s5mfg"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.321956 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.321968 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-mg52n"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.322093 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-l96rs" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.324587 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-ft42n"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.324726 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mg52n" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327289 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-glkw9"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327312 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xn6qp"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327323 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327335 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327346 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327357 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327369 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327379 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327394 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c88bx"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327405 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-799b87ffcd-8622t"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327417 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327431 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-ztcgs"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327442 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ft42n"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327452 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327462 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-ljj2s"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327473 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327482 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-78z8z"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327495 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327418 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ft42n" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327505 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327516 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327527 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.327539 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ttcl5"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329439 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329637 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd3b472c-53e1-402a-ad30-244ea317f0e1-config\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329676 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6f108d0-ed4b-4318-bd96-7de2824bf73e-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.329707 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-config\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329727 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/72f63421-cfe9-45f8-85fe-b779a81a7ebb-audit-dir\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329746 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d5bxk\" (UniqueName: \"kubernetes.io/projected/603cfb78-063c-444d-8434-38e8ff6b5f70-kube-api-access-d5bxk\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329766 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-ca\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329783 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-tmp-dir\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 
09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329814 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329832 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6f108d0-ed4b-4318-bd96-7de2824bf73e-images\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329851 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329869 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-config\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329885 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-encryption-config\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: 
\"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329943 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26f7f00b-d69c-4a82-934c-025eb1500a33-serving-cert\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329969 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsm7d\" (UniqueName: \"kubernetes.io/projected/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-kube-api-access-fsm7d\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.329990 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cnsw2\" (UniqueName: \"kubernetes.io/projected/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-kube-api-access-cnsw2\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330010 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330029 5115 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-serving-cert\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330085 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bh7f\" (UniqueName: \"kubernetes.io/projected/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-kube-api-access-4bh7f\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330108 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2pzbj\" (UniqueName: \"kubernetes.io/projected/73f78db9-bab5-49ee-84a4-9f0825efca8a-kube-api-access-2pzbj\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330127 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-serving-cert\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330146 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-etcd-client\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.330203 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0386fc07-a367-4188-8fab-3ce5d14ad6f2-audit-dir\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330244 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-console-config\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330268 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330297 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330316 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/72f63421-cfe9-45f8-85fe-b779a81a7ebb-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330341 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9aa837bd-63fc-4bb8-b158-d8632117a117-console-oauth-config\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.330527 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-config\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.331374 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/72f63421-cfe9-45f8-85fe-b779a81a7ebb-audit-dir\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.331485 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0386fc07-a367-4188-8fab-3ce5d14ad6f2-audit-dir\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332574 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c6f108d0-ed4b-4318-bd96-7de2824bf73e-images\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc 
kubenswrapper[5115]: I0120 09:10:06.332686 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-config\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332755 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332788 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332825 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-serving-cert\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332860 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86w69\" (UniqueName: \"kubernetes.io/projected/dd3b472c-53e1-402a-ad30-244ea317f0e1-kube-api-access-86w69\") 
pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332926 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g768j\" (UniqueName: \"kubernetes.io/projected/3b28944b-12d3-4087-b906-99fbf2937724-kube-api-access-g768j\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332958 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rrc9m\" (UniqueName: \"kubernetes.io/projected/c6f108d0-ed4b-4318-bd96-7de2824bf73e-kube-api-access-rrc9m\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332982 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k4mq5\" (UniqueName: \"kubernetes.io/projected/9aa837bd-63fc-4bb8-b158-d8632117a117-kube-api-access-k4mq5\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.332979 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.333005 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3b28944b-12d3-4087-b906-99fbf2937724-available-featuregates\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333031 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-auth-proxy-config\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333069 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333097 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26f7f00b-d69c-4a82-934c-025eb1500a33-config\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333117 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-metrics-tls\") pod 
\"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333146 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333165 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3b472c-53e1-402a-ad30-244ea317f0e1-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333197 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-config\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333224 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.333237 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333248 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-trusted-ca-bundle\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333313 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd3b472c-53e1-402a-ad30-244ea317f0e1-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333338 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/676675d9-dafb-4b30-ad88-bea33cf42ce0-config\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333383 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b28944b-12d3-4087-b906-99fbf2937724-serving-cert\") pod 
\"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333405 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26f7f00b-d69c-4a82-934c-025eb1500a33-trusted-ca\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333421 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676675d9-dafb-4b30-ad88-bea33cf42ce0-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333464 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-dir\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333484 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-console-config\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333502 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa837bd-63fc-4bb8-b158-d8632117a117-console-serving-cert\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333521 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-oauth-serving-cert\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333559 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz78h\" (UniqueName: \"kubernetes.io/projected/10472dc9-9bed-4d08-811a-76a55f0d6cf4-kube-api-access-gz78h\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333616 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333633 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-59xcc"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333648 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ksglj\" (UniqueName: \"kubernetes.io/projected/0386fc07-a367-4188-8fab-3ce5d14ad6f2-kube-api-access-ksglj\") pod \"apiserver-8596bd845d-4x4rk\" (UID: 
\"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333678 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10472dc9-9bed-4d08-811a-76a55f0d6cf4-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333713 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-encryption-config\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333741 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10472dc9-9bed-4d08-811a-76a55f0d6cf4-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333766 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-machine-approver-tls\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333778 5115 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-trusted-ca-bundle\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333791 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676675d9-dafb-4b30-ad88-bea33cf42ce0-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333825 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmb7q\" (UniqueName: \"kubernetes.io/projected/b9ac66ad-91ae-4ffd-b159-a7549ca71803-kube-api-access-zmb7q\") pod \"downloads-747b44746d-ljj2s\" (UID: \"b9ac66ad-91ae-4ffd-b159-a7549ca71803\") " pod="openshift-console/downloads-747b44746d-ljj2s" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333854 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-service-ca\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333912 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f108d0-ed4b-4318-bd96-7de2824bf73e-config\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " 
pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333938 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-audit-policies\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333966 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/676675d9-dafb-4b30-ad88-bea33cf42ce0-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333998 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334025 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-image-import-ca\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334048 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-etcd-serving-ca\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334081 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334104 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334126 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603cfb78-063c-444d-8434-38e8ff6b5f70-serving-cert\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334149 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334175 5115 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-client\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334195 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-oauth-serving-cert\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334221 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-audit\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334254 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-dir\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334264 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-trusted-ca-bundle\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334280 5115 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-config\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334293 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334332 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc4b7\" (UniqueName: \"kubernetes.io/projected/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-kube-api-access-sc4b7\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334365 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334393 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-service-ca\") pod \"console-64d44f6ddf-78z8z\" (UID: 
\"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334418 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-config\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334456 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-policies\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334479 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-etcd-client\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334504 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qph7v\" (UniqueName: \"kubernetes.io/projected/72f63421-cfe9-45f8-85fe-b779a81a7ebb-kube-api-access-qph7v\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334590 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.334621 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xckx\" (UniqueName: \"kubernetes.io/projected/26f7f00b-d69c-4a82-934c-025eb1500a33-kube-api-access-8xckx\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.337061 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-serving-cert\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.337187 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6f108d0-ed4b-4318-bd96-7de2824bf73e-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.337383 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-serving-cert\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.333070 5115 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.337965 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-config\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.337990 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/72f63421-cfe9-45f8-85fe-b779a81a7ebb-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.338032 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-etcd-serving-ca\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.338746 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-policies\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 
09:10:06.338922 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-service-ca\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.339122 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/603cfb78-063c-444d-8434-38e8ff6b5f70-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.339219 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-image-import-ca\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.339507 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.339659 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa837bd-63fc-4bb8-b158-d8632117a117-console-serving-cert\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc 
kubenswrapper[5115]: I0120 09:10:06.339993 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0386fc07-a367-4188-8fab-3ce5d14ad6f2-audit-policies\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.340004 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-encryption-config\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.340113 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b28944b-12d3-4087-b906-99fbf2937724-serving-cert\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.340298 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-etcd-client\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.340484 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" 
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.340498 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3b28944b-12d3-4087-b906-99fbf2937724-available-featuregates\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.340503 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.341677 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-audit\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.341840 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342100 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f108d0-ed4b-4318-bd96-7de2824bf73e-config\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " 
pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342276 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72f63421-cfe9-45f8-85fe-b779a81a7ebb-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342392 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342416 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342428 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342440 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342451 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342463 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342474 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342484 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-ttcl5"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342496 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-l96rs"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342506 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-59xcc"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342514 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342525 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342535 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-b674j"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342545 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342557 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pkz7s"] Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342668 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9aa837bd-63fc-4bb8-b158-d8632117a117-console-oauth-config\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.342790 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.343162 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.344740 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0386fc07-a367-4188-8fab-3ce5d14ad6f2-encryption-config\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.344884 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.345595 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.346116 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc 
kubenswrapper[5115]: I0120 09:10:06.347669 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603cfb78-063c-444d-8434-38e8ff6b5f70-serving-cert\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.349868 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72f63421-cfe9-45f8-85fe-b779a81a7ebb-etcd-client\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.350624 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa837bd-63fc-4bb8-b158-d8632117a117-trusted-ca-bundle\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.350796 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.350963 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.351182 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.351835 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.354051 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.356762 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.372568 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.389120 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.409539 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.430120 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.435352 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-auth-proxy-config\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.435466 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26f7f00b-d69c-4a82-934c-025eb1500a33-config\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.435581 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436180 5115 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3b472c-53e1-402a-ad30-244ea317f0e1-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436278 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-config\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436457 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd3b472c-53e1-402a-ad30-244ea317f0e1-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436627 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/676675d9-dafb-4b30-ad88-bea33cf42ce0-config\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436705 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26f7f00b-d69c-4a82-934c-025eb1500a33-trusted-ca\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " 
pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436785 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676675d9-dafb-4b30-ad88-bea33cf42ce0-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436913 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gz78h\" (UniqueName: \"kubernetes.io/projected/10472dc9-9bed-4d08-811a-76a55f0d6cf4-kube-api-access-gz78h\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437006 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10472dc9-9bed-4d08-811a-76a55f0d6cf4-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437083 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10472dc9-9bed-4d08-811a-76a55f0d6cf4-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437152 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-machine-approver-tls\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437216 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676675d9-dafb-4b30-ad88-bea33cf42ce0-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437288 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zmb7q\" (UniqueName: \"kubernetes.io/projected/b9ac66ad-91ae-4ffd-b159-a7549ca71803-kube-api-access-zmb7q\") pod \"downloads-747b44746d-ljj2s\" (UID: \"b9ac66ad-91ae-4ffd-b159-a7549ca71803\") " pod="openshift-console/downloads-747b44746d-ljj2s" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437358 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-service-ca\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437430 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/676675d9-dafb-4b30-ad88-bea33cf42ce0-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" 
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437517 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437583 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-client\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437663 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437717 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-config\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.436287 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-auth-proxy-config\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " 
pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437737 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sc4b7\" (UniqueName: \"kubernetes.io/projected/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-kube-api-access-sc4b7\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437920 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-config\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.438044 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8xckx\" (UniqueName: \"kubernetes.io/projected/26f7f00b-d69c-4a82-934c-025eb1500a33-kube-api-access-8xckx\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.438139 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd3b472c-53e1-402a-ad30-244ea317f0e1-config\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.439975 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-ca\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.440025 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-tmp-dir\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.440126 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26f7f00b-d69c-4a82-934c-025eb1500a33-serving-cert\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.440167 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fsm7d\" (UniqueName: \"kubernetes.io/projected/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-kube-api-access-fsm7d\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.440214 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4bh7f\" (UniqueName: \"kubernetes.io/projected/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-kube-api-access-4bh7f\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.440288 5115 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-serving-cert\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.440326 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-86w69\" (UniqueName: \"kubernetes.io/projected/dd3b472c-53e1-402a-ad30-244ea317f0e1-kube-api-access-86w69\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.439286 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd3b472c-53e1-402a-ad30-244ea317f0e1-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.441159 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-ca\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.439709 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/676675d9-dafb-4b30-ad88-bea33cf42ce0-config\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.441361 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3b472c-53e1-402a-ad30-244ea317f0e1-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.437023 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26f7f00b-d69c-4a82-934c-025eb1500a33-config\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.441502 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-tmp-dir\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.441825 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26f7f00b-d69c-4a82-934c-025eb1500a33-trusted-ca\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.441836 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/676675d9-dafb-4b30-ad88-bea33cf42ce0-tmp\") pod 
\"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.442195 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-service-ca\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.443028 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.443661 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/10472dc9-9bed-4d08-811a-76a55f0d6cf4-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.444510 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.444518 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-config\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.444970 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd3b472c-53e1-402a-ad30-244ea317f0e1-config\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.445088 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/676675d9-dafb-4b30-ad88-bea33cf42ce0-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.445273 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26f7f00b-d69c-4a82-934c-025eb1500a33-serving-cert\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.450618 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-machine-approver-tls\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" Jan 20 09:10:06 crc 
kubenswrapper[5115]: I0120 09:10:06.452800 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-serving-cert\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.453404 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.469992 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.476217 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-etcd-client\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.510608 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.517034 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/10472dc9-9bed-4d08-811a-76a55f0d6cf4-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.530948 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 
20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.550635 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.570069 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.589939 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.610616 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.630089 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.650431 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.679701 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.690319 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.709918 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.730331 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.750019 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.769863 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.790623 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.810382 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.829949 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.851342 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.870789 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.891404 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.910133 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.929910 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.950776 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.971476 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 20 09:10:06 crc kubenswrapper[5115]: I0120 09:10:06.990850 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.012575 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.032349 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.051202 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.071803 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.091396 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.110944 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.130696 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.150466 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.169857 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.210807 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.215869 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx"
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.231455 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.250936 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.270006 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.288258 5115 request.go:752] "Waited before sending request" delay="1.010586692s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&limit=500&resourceVersion=0"
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.289832 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.311229 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.330080 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.350343 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.369801 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.390836 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.410284 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.429204 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.451203 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.469311 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.489482 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.509601 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.530013 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.549937 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.571525 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.590064 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.609568 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.629504 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.650117 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.669450 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.690124 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.710194 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.729734 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.750876 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.770119 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.790978 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.811260 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.830348 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.861523 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.869729 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.890118 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.910198 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.929999 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:10:07 crc kubenswrapper[5115]: I0120 09:10:07.950534 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.256017 5115 request.go:752] "Waited before sending request" delay="2.933686479s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.259967 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.292460 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.293118 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.293997 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.294304 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.306522 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.308635 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.308880 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.308959 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnsw2\" (UniqueName: \"kubernetes.io/projected/09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec-kube-api-access-cnsw2\") pod \"openshift-apiserver-operator-846cbfc458-5494g\" (UID: \"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309082 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309136 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309241 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309432 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309593 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309795 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309957 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.310274 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.310376 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.309971 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.310813 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.310881 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.311159 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.311354 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.313102 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.314257 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/676675d9-dafb-4b30-ad88-bea33cf42ce0-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-777zr\" (UID: \"676675d9-dafb-4b30-ad88-bea33cf42ce0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.315624 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksglj\" (UniqueName: \"kubernetes.io/projected/0386fc07-a367-4188-8fab-3ce5d14ad6f2-kube-api-access-ksglj\") pod \"apiserver-8596bd845d-4x4rk\" (UID: \"0386fc07-a367-4188-8fab-3ce5d14ad6f2\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.315722 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-86w69\" (UniqueName: \"kubernetes.io/projected/dd3b472c-53e1-402a-ad30-244ea317f0e1-kube-api-access-86w69\") pod \"openshift-controller-manager-operator-686468bdd5-s85qm\" (UID: \"dd3b472c-53e1-402a-ad30-244ea317f0e1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.317171 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pzbj\" (UniqueName: \"kubernetes.io/projected/73f78db9-bab5-49ee-84a4-9f0825efca8a-kube-api-access-2pzbj\") pod \"oauth-openshift-66458b6674-c88bx\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " pod="openshift-authentication/oauth-openshift-66458b6674-c88bx"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.317431 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrc9m\" (UniqueName: \"kubernetes.io/projected/c6f108d0-ed4b-4318-bd96-7de2824bf73e-kube-api-access-rrc9m\") pod \"machine-api-operator-755bb95488-2vzsk\" (UID: \"c6f108d0-ed4b-4318-bd96-7de2824bf73e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.320280 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.330784 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qph7v\" (UniqueName: \"kubernetes.io/projected/72f63421-cfe9-45f8-85fe-b779a81a7ebb-kube-api-access-qph7v\") pod \"apiserver-9ddfb9f55-xn6qp\" (UID: \"72f63421-cfe9-45f8-85fe-b779a81a7ebb\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.331102 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.331165 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xckx\" (UniqueName: \"kubernetes.io/projected/26f7f00b-d69c-4a82-934c-025eb1500a33-kube-api-access-8xckx\") pod \"console-operator-67c89758df-glkw9\" (UID: \"26f7f00b-d69c-4a82-934c-025eb1500a33\") " pod="openshift-console-operator/console-operator-67c89758df-glkw9"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.334401 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bh7f\" (UniqueName: \"kubernetes.io/projected/472e0bfa-47b1-4a6c-8fd5-3c5a0865c001-kube-api-access-4bh7f\") pod \"etcd-operator-69b85846b6-bxhkt\" (UID: \"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.335078 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g768j\" (UniqueName: \"kubernetes.io/projected/3b28944b-12d3-4087-b906-99fbf2937724-kube-api-access-g768j\") pod \"openshift-config-operator-5777786469-s5mfg\" (UID: \"3b28944b-12d3-4087-b906-99fbf2937724\") " pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.338189 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.342973 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc4b7\" (UniqueName: \"kubernetes.io/projected/45b3a05c-a4a6-4e67-9c8f-c914c93cb801-kube-api-access-sc4b7\") pod \"machine-approver-54c688565-6lm7w\" (UID: \"45b3a05c-a4a6-4e67-9c8f-c914c93cb801\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.343766 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsm7d\" (UniqueName: \"kubernetes.io/projected/d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf-kube-api-access-fsm7d\") pod \"ingress-operator-6b9cb4dbcf-5rdz6\" (UID: \"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.343766 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz78h\" (UniqueName: \"kubernetes.io/projected/10472dc9-9bed-4d08-811a-76a55f0d6cf4-kube-api-access-gz78h\") pod \"machine-config-controller-f9cdd68f7-7ntwm\" (UID: \"10472dc9-9bed-4d08-811a-76a55f0d6cf4\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.344482 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4mq5\" (UniqueName: \"kubernetes.io/projected/9aa837bd-63fc-4bb8-b158-d8632117a117-kube-api-access-k4mq5\") pod \"console-64d44f6ddf-78z8z\" (UID: \"9aa837bd-63fc-4bb8-b158-d8632117a117\") " pod="openshift-console/console-64d44f6ddf-78z8z"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.345197 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5bxk\" (UniqueName: \"kubernetes.io/projected/603cfb78-063c-444d-8434-38e8ff6b5f70-kube-api-access-d5bxk\") pod \"authentication-operator-7f5c659b84-pss2p\" (UID: \"603cfb78-063c-444d-8434-38e8ff6b5f70\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.354680 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmb7q\" (UniqueName: \"kubernetes.io/projected/b9ac66ad-91ae-4ffd-b159-a7549ca71803-kube-api-access-zmb7q\") pod \"downloads-747b44746d-ljj2s\" (UID: \"b9ac66ad-91ae-4ffd-b159-a7549ca71803\") " pod="openshift-console/downloads-747b44746d-ljj2s"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.384763 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-trusted-ca\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.384825 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7mcb\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-kube-api-access-v7mcb\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.390008 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-installation-pull-secrets\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.390312 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-certificates\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.390456 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.390605 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-tls\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.390695 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-ca-trust-extracted\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.390980 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-bound-sa-token\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.391003 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:09.890983957 +0000 UTC m=+120.059762487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.391378 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.434210 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.441917 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.475375 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.488296 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.494785 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.496498 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.496532 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:09.996479245 +0000 UTC m=+120.165257775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.496799 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-697ts\" (UniqueName: \"kubernetes.io/projected/664dc1e9-b220-4dd9-8576-b5798850bc57-kube-api-access-697ts\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.496856 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ecb1b469-4758-499e-a0ba-8204058552be-tmp-dir\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.496885 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d93cff2-21b0-4fcb-b899-b6efe5a56822-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.496941 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x65zw\" (UniqueName: \"kubernetes.io/projected/4d93cff2-21b0-4fcb-b899-b6efe5a56822-kube-api-access-x65zw\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.496956 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6008f0e6-56c0-4fdd-89b8-0649fb365b0f-webhook-certs\") pod \"multus-admission-controller-69db94689b-ztcgs\" (UID: \"6008f0e6-56c0-4fdd-89b8-0649fb365b0f\") " pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497009 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba355-2c21-431c-8767-821fb9075e1c-tmpfs\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497026 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-socket-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497048 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-config-volume\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497087 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21e183fd-a881-4f61-a726-bcaaf60e71d5-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497107 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ac548cbe-da92-4dd6-bd33-705689710018-tmp-dir\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497171 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/118decd3-a665-4997-bd40-0f68d2295238-images\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497187 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-plugins-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497201 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmrql\" (UniqueName: \"kubernetes.io/projected/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-kube-api-access-wmrql\") pod \"dns-default-59xcc\" (UID: 
\"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497247 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8c6ba355-2c21-431c-8767-821fb9075e1c-srv-cert\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497266 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b39cc292-22ad-4fb0-9d3f-6467c81680eb-serving-cert\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497327 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-stats-auth\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497371 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-config\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497795 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: 
\"kubernetes.io/empty-dir/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497829 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4cwq\" (UniqueName: \"kubernetes.io/projected/3b4463ed-eba2-4ba4-afb8-2424e957fc37-kube-api-access-h4cwq\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497859 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-tmp-dir\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497880 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/118decd3-a665-4997-bd40-0f68d2295238-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497928 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-installation-pull-secrets\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc 
kubenswrapper[5115]: I0120 09:10:09.497954 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d93cff2-21b0-4fcb-b899-b6efe5a56822-ready\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.497979 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/664dc1e9-b220-4dd9-8576-b5798850bc57-serving-cert\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498000 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4cj7\" (UniqueName: \"kubernetes.io/projected/f7ec9898-6747-40af-be60-ce1289d0a4e6-kube-api-access-f4cj7\") pod \"control-plane-machine-set-operator-75ffdb6fcd-69gcn\" (UID: \"f7ec9898-6747-40af-be60-ce1289d0a4e6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498021 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498045 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498068 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b967aa59-3ad8-4a80-a870-970c4166dd31-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-gc77j\" (UID: \"b967aa59-3ad8-4a80-a870-970c4166dd31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498636 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-csi-data-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498679 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqt2z\" (UniqueName: \"kubernetes.io/projected/082f3bd2-f112-4f2e-b955-0826aac6df97-kube-api-access-xqt2z\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498701 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3984fc5a-413e-46e1-94ab-3c230891fe87-tmp\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498733 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d93cff2-21b0-4fcb-b899-b6efe5a56822-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498755 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/664dc1e9-b220-4dd9-8576-b5798850bc57-tmp\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498776 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d738dd6-3c15-4131-837d-591792cb41cd-service-ca-bundle\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.498888 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499092 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/fbc48af4-261d-4599-a7fd-edd26b2b4022-cert\") pod \"ingress-canary-ft42n\" (UID: \"fbc48af4-261d-4599-a7fd-edd26b2b4022\") " pod="openshift-ingress-canary/ingress-canary-ft42n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499114 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-metrics-certs\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499153 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-certificates\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499178 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e183fd-a881-4f61-a726-bcaaf60e71d5-config\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499200 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6l82\" (UniqueName: \"kubernetes.io/projected/6008f0e6-56c0-4fdd-89b8-0649fb365b0f-kube-api-access-g6l82\") pod \"multus-admission-controller-69db94689b-ztcgs\" (UID: \"6008f0e6-56c0-4fdd-89b8-0649fb365b0f\") " pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" Jan 20 09:10:09 crc 
kubenswrapper[5115]: I0120 09:10:09.499231 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499256 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-client-ca\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499278 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6hvv\" (UniqueName: \"kubernetes.io/projected/3984fc5a-413e-46e1-94ab-3c230891fe87-kube-api-access-l6hvv\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499410 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssv77\" (UniqueName: \"kubernetes.io/projected/fbc48af4-261d-4599-a7fd-edd26b2b4022-kube-api-access-ssv77\") pod \"ingress-canary-ft42n\" (UID: \"fbc48af4-261d-4599-a7fd-edd26b2b4022\") " pod="openshift-ingress-canary/ingress-canary-ft42n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499456 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-tmpfs\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499494 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-webhook-cert\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.499516 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj2cf\" (UniqueName: \"kubernetes.io/projected/b39cc292-22ad-4fb0-9d3f-6467c81680eb-kube-api-access-cj2cf\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.501074 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.001053407 +0000 UTC m=+120.169831997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.502197 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-certificates\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506392 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506463 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb1b469-4758-499e-a0ba-8204058552be-config\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506539 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-config\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506644 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-trusted-ca\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506666 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506685 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-srv-cert\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506708 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfgv4\" (UniqueName: \"kubernetes.io/projected/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-kube-api-access-gfgv4\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506736 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-node-bootstrap-token\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506758 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znfxc\" (UniqueName: \"kubernetes.io/projected/a8dd6004-2cc4-4971-9dcb-18d8871286b8-kube-api-access-znfxc\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506843 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7ec9898-6747-40af-be60-ce1289d0a4e6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-69gcn\" (UID: \"f7ec9898-6747-40af-be60-ce1289d0a4e6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506875 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-metrics-tls\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506930 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kmtmj\" (UniqueName: \"kubernetes.io/projected/0d738dd6-3c15-4131-837d-591792cb41cd-kube-api-access-kmtmj\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506953 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/118decd3-a665-4997-bd40-0f68d2295238-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.506989 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-apiservice-cert\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507007 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3b4463ed-eba2-4ba4-afb8-2424e957fc37-signing-key\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507111 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-tmpfs\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 
crc kubenswrapper[5115]: I0120 09:10:09.507134 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6477423-4b0a-43d7-9514-bde25388af77-config\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507157 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507177 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-registration-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507217 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v7mcb\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-kube-api-access-v7mcb\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507242 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f41303d0-06e3-4554-8fa9-d9dd935d0bec-serving-cert\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507263 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9cmd\" (UniqueName: \"kubernetes.io/projected/8c6ba355-2c21-431c-8767-821fb9075e1c-kube-api-access-r9cmd\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507277 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-installation-pull-secrets\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507542 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.507290 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/082f3bd2-f112-4f2e-b955-0826aac6df97-config-volume\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508098 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/31a102f9-d392-481f-85f7-4be9117cd31d-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-lcng5\" (UID: \"31a102f9-d392-481f-85f7-4be9117cd31d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508153 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41303d0-06e3-4554-8fa9-d9dd935d0bec-config\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508175 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508196 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7kxc\" (UniqueName: \"kubernetes.io/projected/b967aa59-3ad8-4a80-a870-970c4166dd31-kube-api-access-v7kxc\") pod \"package-server-manager-77f986bd66-gc77j\" (UID: \"b967aa59-3ad8-4a80-a870-970c4166dd31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508268 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508290 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5kpq\" (UniqueName: \"kubernetes.io/projected/118decd3-a665-4997-bd40-0f68d2295238-kube-api-access-z5kpq\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508312 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-default-certificate\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508357 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac548cbe-da92-4dd6-bd33-705689710018-metrics-tls\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508389 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecb1b469-4758-499e-a0ba-8204058552be-kube-api-access\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508417 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nzmj\" (UniqueName: \"kubernetes.io/projected/31a102f9-d392-481f-85f7-4be9117cd31d-kube-api-access-4nzmj\") pod \"cluster-samples-operator-6b564684c8-lcng5\" (UID: \"31a102f9-d392-481f-85f7-4be9117cd31d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508432 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-client-ca\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508450 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-certs\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508479 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/21e183fd-a881-4f61-a726-bcaaf60e71d5-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508496 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpl59\" (UniqueName: \"kubernetes.io/projected/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-kube-api-access-bpl59\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508562 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3b4463ed-eba2-4ba4-afb8-2424e957fc37-signing-cabundle\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508585 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb7kn\" (UniqueName: \"kubernetes.io/projected/f6477423-4b0a-43d7-9514-bde25388af77-kube-api-access-hb7kn\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508604 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p86sf\" (UniqueName: \"kubernetes.io/projected/01855721-bd0b-4ddc-91d0-be658345b9c5-kube-api-access-p86sf\") pod \"migrator-866fcbc849-xtwqk\" (UID: \"01855721-bd0b-4ddc-91d0-be658345b9c5\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508625 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e183fd-a881-4f61-a726-bcaaf60e71d5-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508645 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb1b469-4758-499e-a0ba-8204058552be-serving-cert\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508661 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8c6ba355-2c21-431c-8767-821fb9075e1c-profile-collector-cert\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.508714 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6477423-4b0a-43d7-9514-bde25388af77-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.511375 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-mountpoint-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.511426 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-tls\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.511449 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-ca-trust-extracted\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.511489 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-bound-sa-token\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.512312 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-ca-trust-extracted\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.512812 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/082f3bd2-f112-4f2e-b955-0826aac6df97-secret-volume\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.512878 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nz27\" (UniqueName: \"kubernetes.io/projected/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-kube-api-access-2nz27\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.513219 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7857\" (UniqueName: \"kubernetes.io/projected/ac548cbe-da92-4dd6-bd33-705689710018-kube-api-access-k7857\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.513470 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b8c4\" (UniqueName: \"kubernetes.io/projected/f41303d0-06e3-4554-8fa9-d9dd935d0bec-kube-api-access-8b8c4\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.513627 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrkgx\" (UniqueName: \"kubernetes.io/projected/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-kube-api-access-nrkgx\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.513688 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b39cc292-22ad-4fb0-9d3f-6467c81680eb-tmp\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.515948 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.518639 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-trusted-ca\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.528375 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-tls\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.530435 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-78z8z"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.533587 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-bound-sa-token\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.539235 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7mcb\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-kube-api-access-v7mcb\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.539349 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-glkw9"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.553039 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.561180 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.570969 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-ljj2s"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.581275 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.603522 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.624871 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.625086 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-client-ca\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.625119 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l6hvv\" (UniqueName: \"kubernetes.io/projected/3984fc5a-413e-46e1-94ab-3c230891fe87-kube-api-access-l6hvv\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.625139 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ssv77\" (UniqueName: \"kubernetes.io/projected/fbc48af4-261d-4599-a7fd-edd26b2b4022-kube-api-access-ssv77\") pod \"ingress-canary-ft42n\" (UID: \"fbc48af4-261d-4599-a7fd-edd26b2b4022\") " pod="openshift-ingress-canary/ingress-canary-ft42n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.625158 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-tmpfs\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"
Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.625293 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.125266505 +0000 UTC m=+120.294045035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626107 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-client-ca\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626110 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-webhook-cert\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626165 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cj2cf\" (UniqueName: \"kubernetes.io/projected/b39cc292-22ad-4fb0-9d3f-6467c81680eb-kube-api-access-cj2cf\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626193 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626211 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb1b469-4758-499e-a0ba-8204058552be-config\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626226 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-config\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626247 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626263 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-srv-cert\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626281 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gfgv4\" (UniqueName: \"kubernetes.io/projected/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-kube-api-access-gfgv4\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626299 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-node-bootstrap-token\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626317 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-znfxc\" (UniqueName: \"kubernetes.io/projected/a8dd6004-2cc4-4971-9dcb-18d8871286b8-kube-api-access-znfxc\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626340 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7ec9898-6747-40af-be60-ce1289d0a4e6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-69gcn\" (UID: \"f7ec9898-6747-40af-be60-ce1289d0a4e6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626363 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-metrics-tls\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626380 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kmtmj\" (UniqueName: \"kubernetes.io/projected/0d738dd6-3c15-4131-837d-591792cb41cd-kube-api-access-kmtmj\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626398 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/118decd3-a665-4997-bd40-0f68d2295238-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626416 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-apiservice-cert\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626434 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3b4463ed-eba2-4ba4-afb8-2424e957fc37-signing-key\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626459 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-tmpfs\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626476 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6477423-4b0a-43d7-9514-bde25388af77-config\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626493 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626511 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-registration-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626534 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f41303d0-06e3-4554-8fa9-d9dd935d0bec-serving-cert\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626552 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r9cmd\" (UniqueName: \"kubernetes.io/projected/8c6ba355-2c21-431c-8767-821fb9075e1c-kube-api-access-r9cmd\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626573 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/082f3bd2-f112-4f2e-b955-0826aac6df97-config-volume\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626601 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/31a102f9-d392-481f-85f7-4be9117cd31d-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-lcng5\" (UID: \"31a102f9-d392-481f-85f7-4be9117cd31d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626622 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41303d0-06e3-4554-8fa9-d9dd935d0bec-config\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626638 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626655 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v7kxc\" (UniqueName: \"kubernetes.io/projected/b967aa59-3ad8-4a80-a870-970c4166dd31-kube-api-access-v7kxc\") pod \"package-server-manager-77f986bd66-gc77j\" (UID: \"b967aa59-3ad8-4a80-a870-970c4166dd31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626675 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626692 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z5kpq\" (UniqueName: \"kubernetes.io/projected/118decd3-a665-4997-bd40-0f68d2295238-kube-api-access-z5kpq\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626712 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-default-certificate\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626727 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac548cbe-da92-4dd6-bd33-705689710018-metrics-tls\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626751 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecb1b469-4758-499e-a0ba-8204058552be-kube-api-access\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626769 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4nzmj\" (UniqueName: \"kubernetes.io/projected/31a102f9-d392-481f-85f7-4be9117cd31d-kube-api-access-4nzmj\") pod \"cluster-samples-operator-6b564684c8-lcng5\" (UID: \"31a102f9-d392-481f-85f7-4be9117cd31d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626785 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-client-ca\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626800 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-certs\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626818 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/21e183fd-a881-4f61-a726-bcaaf60e71d5-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626835 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bpl59\" (UniqueName: \"kubernetes.io/projected/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-kube-api-access-bpl59\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626851 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3b4463ed-eba2-4ba4-afb8-2424e957fc37-signing-cabundle\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.626879 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hb7kn\" (UniqueName: \"kubernetes.io/projected/f6477423-4b0a-43d7-9514-bde25388af77-kube-api-access-hb7kn\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.632118 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-webhook-cert\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637317 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p86sf\" (UniqueName: \"kubernetes.io/projected/01855721-bd0b-4ddc-91d0-be658345b9c5-kube-api-access-p86sf\") pod \"migrator-866fcbc849-xtwqk\" (UID: \"01855721-bd0b-4ddc-91d0-be658345b9c5\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637383 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e183fd-a881-4f61-a726-bcaaf60e71d5-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637410 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb1b469-4758-499e-a0ba-8204058552be-serving-cert\") pod
\"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637432 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8c6ba355-2c21-431c-8767-821fb9075e1c-profile-collector-cert\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637475 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6477423-4b0a-43d7-9514-bde25388af77-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637499 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-mountpoint-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637535 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/082f3bd2-f112-4f2e-b955-0826aac6df97-secret-volume\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637555 5115 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2nz27\" (UniqueName: \"kubernetes.io/projected/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-kube-api-access-2nz27\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637564 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/118decd3-a665-4997-bd40-0f68d2295238-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637594 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k7857\" (UniqueName: \"kubernetes.io/projected/ac548cbe-da92-4dd6-bd33-705689710018-kube-api-access-k7857\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637640 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8b8c4\" (UniqueName: \"kubernetes.io/projected/f41303d0-06e3-4554-8fa9-d9dd935d0bec-kube-api-access-8b8c4\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637712 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nrkgx\" (UniqueName: \"kubernetes.io/projected/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-kube-api-access-nrkgx\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637737 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b39cc292-22ad-4fb0-9d3f-6467c81680eb-tmp\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637802 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-697ts\" (UniqueName: \"kubernetes.io/projected/664dc1e9-b220-4dd9-8576-b5798850bc57-kube-api-access-697ts\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637840 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ecb1b469-4758-499e-a0ba-8204058552be-tmp-dir\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637869 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d93cff2-21b0-4fcb-b899-b6efe5a56822-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637908 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x65zw\" (UniqueName: 
\"kubernetes.io/projected/4d93cff2-21b0-4fcb-b899-b6efe5a56822-kube-api-access-x65zw\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637932 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6008f0e6-56c0-4fdd-89b8-0649fb365b0f-webhook-certs\") pod \"multus-admission-controller-69db94689b-ztcgs\" (UID: \"6008f0e6-56c0-4fdd-89b8-0649fb365b0f\") " pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637976 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba355-2c21-431c-8767-821fb9075e1c-tmpfs\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.637993 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-socket-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638020 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-config-volume\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638038 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/21e183fd-a881-4f61-a726-bcaaf60e71d5-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638054 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ac548cbe-da92-4dd6-bd33-705689710018-tmp-dir\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638071 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/118decd3-a665-4997-bd40-0f68d2295238-images\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638087 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-plugins-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638105 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wmrql\" (UniqueName: \"kubernetes.io/projected/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-kube-api-access-wmrql\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638121 5115 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8c6ba355-2c21-431c-8767-821fb9075e1c-srv-cert\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638137 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b39cc292-22ad-4fb0-9d3f-6467c81680eb-serving-cert\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638176 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-stats-auth\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638212 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-config\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638237 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:09 crc 
kubenswrapper[5115]: I0120 09:10:09.638259 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h4cwq\" (UniqueName: \"kubernetes.io/projected/3b4463ed-eba2-4ba4-afb8-2424e957fc37-kube-api-access-h4cwq\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638286 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-tmp-dir\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638301 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/118decd3-a665-4997-bd40-0f68d2295238-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638326 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d93cff2-21b0-4fcb-b899-b6efe5a56822-ready\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638352 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/664dc1e9-b220-4dd9-8576-b5798850bc57-serving-cert\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 
09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638374 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f4cj7\" (UniqueName: \"kubernetes.io/projected/f7ec9898-6747-40af-be60-ce1289d0a4e6-kube-api-access-f4cj7\") pod \"control-plane-machine-set-operator-75ffdb6fcd-69gcn\" (UID: \"f7ec9898-6747-40af-be60-ce1289d0a4e6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638406 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638426 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638442 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b967aa59-3ad8-4a80-a870-970c4166dd31-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-gc77j\" (UID: \"b967aa59-3ad8-4a80-a870-970c4166dd31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638474 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-csi-data-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638500 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xqt2z\" (UniqueName: \"kubernetes.io/projected/082f3bd2-f112-4f2e-b955-0826aac6df97-kube-api-access-xqt2z\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638517 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3984fc5a-413e-46e1-94ab-3c230891fe87-tmp\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638537 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d93cff2-21b0-4fcb-b899-b6efe5a56822-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638554 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/664dc1e9-b220-4dd9-8576-b5798850bc57-tmp\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638573 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d738dd6-3c15-4131-837d-591792cb41cd-service-ca-bundle\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638605 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638623 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbc48af4-261d-4599-a7fd-edd26b2b4022-cert\") pod \"ingress-canary-ft42n\" (UID: \"fbc48af4-261d-4599-a7fd-edd26b2b4022\") " pod="openshift-ingress-canary/ingress-canary-ft42n" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638640 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-metrics-certs\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638662 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e183fd-a881-4f61-a726-bcaaf60e71d5-config\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.638679 5115 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g6l82\" (UniqueName: \"kubernetes.io/projected/6008f0e6-56c0-4fdd-89b8-0649fb365b0f-kube-api-access-g6l82\") pod \"multus-admission-controller-69db94689b-ztcgs\" (UID: \"6008f0e6-56c0-4fdd-89b8-0649fb365b0f\") " pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.639202 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b39cc292-22ad-4fb0-9d3f-6467c81680eb-tmp\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.639505 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ecb1b469-4758-499e-a0ba-8204058552be-tmp-dir\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.640115 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-tmp-dir\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.640124 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d93cff2-21b0-4fcb-b899-b6efe5a56822-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 
09:10:09.640622 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/118decd3-a665-4997-bd40-0f68d2295238-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.640852 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d93cff2-21b0-4fcb-b899-b6efe5a56822-ready\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.642295 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.642364 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-mountpoint-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.642565 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-plugins-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 
09:10:09.643301 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8c6ba355-2c21-431c-8767-821fb9075e1c-tmpfs\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.643393 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-socket-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.645086 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ac548cbe-da92-4dd6-bd33-705689710018-tmp-dir\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.645678 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-config-volume\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.645838 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3984fc5a-413e-46e1-94ab-3c230891fe87-tmp\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.645870 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6477423-4b0a-43d7-9514-bde25388af77-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.646305 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/118decd3-a665-4997-bd40-0f68d2295238-images\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.647050 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.647277 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e183fd-a881-4f61-a726-bcaaf60e71d5-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.648031 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d93cff2-21b0-4fcb-b899-b6efe5a56822-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.648373 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/664dc1e9-b220-4dd9-8576-b5798850bc57-tmp\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.648747 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-config\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.649099 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.650032 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d738dd6-3c15-4131-837d-591792cb41cd-service-ca-bundle\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.651138 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-csi-data-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.652253 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb1b469-4758-499e-a0ba-8204058552be-serving-cert\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.653375 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8c6ba355-2c21-431c-8767-821fb9075e1c-profile-collector-cert\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.653744 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6008f0e6-56c0-4fdd-89b8-0649fb365b0f-webhook-certs\") pod \"multus-admission-controller-69db94689b-ztcgs\" (UID: \"6008f0e6-56c0-4fdd-89b8-0649fb365b0f\") " pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.654524 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b39cc292-22ad-4fb0-9d3f-6467c81680eb-serving-cert\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.655222 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-tmpfs\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.655459 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.655525 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a8dd6004-2cc4-4971-9dcb-18d8871286b8-registration-dir\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.655665 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/082f3bd2-f112-4f2e-b955-0826aac6df97-secret-volume\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.657065 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.657843 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e183fd-a881-4f61-a726-bcaaf60e71d5-config\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.658177 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/664dc1e9-b220-4dd9-8576-b5798850bc57-serving-cert\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.658553 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-tmpfs\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.658998 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-client-ca\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.658996 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-apiservice-cert\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.659471 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb1b469-4758-499e-a0ba-8204058552be-config\") pod \"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.660170 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/082f3bd2-f112-4f2e-b955-0826aac6df97-config-volume\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.661585 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-config\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.662503 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3b4463ed-eba2-4ba4-afb8-2424e957fc37-signing-cabundle\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.662723 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/21e183fd-a881-4f61-a726-bcaaf60e71d5-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.673277 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-stats-auth\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.673664 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.673721 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fbc48af4-261d-4599-a7fd-edd26b2b4022-cert\") pod \"ingress-canary-ft42n\" (UID: \"fbc48af4-261d-4599-a7fd-edd26b2b4022\") " pod="openshift-ingress-canary/ingress-canary-ft42n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.674146 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41303d0-06e3-4554-8fa9-d9dd935d0bec-config\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.674272 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.674403 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f41303d0-06e3-4554-8fa9-d9dd935d0bec-serving-cert\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.674750 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8c6ba355-2c21-431c-8767-821fb9075e1c-srv-cert\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.676215 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6477423-4b0a-43d7-9514-bde25388af77-config\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.677401 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3b4463ed-eba2-4ba4-afb8-2424e957fc37-signing-key\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.682034 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt"]
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.682625 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-certs\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.684062 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6hvv\" (UniqueName: \"kubernetes.io/projected/3984fc5a-413e-46e1-94ab-3c230891fe87-kube-api-access-l6hvv\") pod \"marketplace-operator-547dbd544d-9gfdh\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.685487 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-metrics-certs\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.685536 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b967aa59-3ad8-4a80-a870-970c4166dd31-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-gc77j\" (UID: \"b967aa59-3ad8-4a80-a870-970c4166dd31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.688506 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7857\" (UniqueName: \"kubernetes.io/projected/ac548cbe-da92-4dd6-bd33-705689710018-kube-api-access-k7857\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.691418 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm"]
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.692687 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-metrics-tls\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.692888 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d738dd6-3c15-4131-837d-591792cb41cd-default-certificate\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.694007 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7ec9898-6747-40af-be60-ce1289d0a4e6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-69gcn\" (UID: \"f7ec9898-6747-40af-be60-ce1289d0a4e6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.695718 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-node-bootstrap-token\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.696264 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/31a102f9-d392-481f-85f7-4be9117cd31d-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-lcng5\" (UID: \"31a102f9-d392-481f-85f7-4be9117cd31d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.699688 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-srv-cert\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.701367 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac548cbe-da92-4dd6-bd33-705689710018-metrics-tls\") pod \"dns-operator-799b87ffcd-8622t\" (UID: \"ac548cbe-da92-4dd6-bd33-705689710018\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.701500 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.710201 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6l82\" (UniqueName: \"kubernetes.io/projected/6008f0e6-56c0-4fdd-89b8-0649fb365b0f-kube-api-access-g6l82\") pod \"multus-admission-controller-69db94689b-ztcgs\" (UID: \"6008f0e6-56c0-4fdd-89b8-0649fb365b0f\") " pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.714578 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b8c4\" (UniqueName: \"kubernetes.io/projected/f41303d0-06e3-4554-8fa9-d9dd935d0bec-kube-api-access-8b8c4\") pod \"service-ca-operator-5b9c976747-9hn8c\" (UID: \"f41303d0-06e3-4554-8fa9-d9dd935d0bec\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.728293 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrkgx\" (UniqueName: \"kubernetes.io/projected/d702c0ea-d2bd-41dc-9a3a-39caacbb288d-kube-api-access-nrkgx\") pod \"packageserver-7d4fc7d867-smr5d\" (UID: \"d702c0ea-d2bd-41dc-9a3a-39caacbb288d\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.740817 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.742070 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.242053105 +0000 UTC m=+120.410831635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.752785 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-697ts\" (UniqueName: \"kubernetes.io/projected/664dc1e9-b220-4dd9-8576-b5798850bc57-kube-api-access-697ts\") pod \"controller-manager-65b6cccf98-lg8fb\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.787195 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.808723 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p86sf\" (UniqueName: \"kubernetes.io/projected/01855721-bd0b-4ddc-91d0-be658345b9c5-kube-api-access-p86sf\") pod \"migrator-866fcbc849-xtwqk\" (UID: \"01855721-bd0b-4ddc-91d0-be658345b9c5\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.811252 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x65zw\" (UniqueName: \"kubernetes.io/projected/4d93cff2-21b0-4fcb-b899-b6efe5a56822-kube-api-access-x65zw\") pod \"cni-sysctl-allowlist-ds-pkz7s\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.824192 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.828581 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21e183fd-a881-4f61-a726-bcaaf60e71d5-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-rjtnv\" (UID: \"21e183fd-a881-4f61-a726-bcaaf60e71d5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.844446 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.845124 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.345107747 +0000 UTC m=+120.513886277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.847826 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4cj7\" (UniqueName: \"kubernetes.io/projected/f7ec9898-6747-40af-be60-ce1289d0a4e6-kube-api-access-f4cj7\") pod \"control-plane-machine-set-operator-75ffdb6fcd-69gcn\" (UID: \"f7ec9898-6747-40af-be60-ce1289d0a4e6\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.861265 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.872043 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4cwq\" (UniqueName: \"kubernetes.io/projected/3b4463ed-eba2-4ba4-afb8-2424e957fc37-kube-api-access-h4cwq\") pod \"service-ca-74545575db-l96rs\" (UID: \"3b4463ed-eba2-4ba4-afb8-2424e957fc37\") " pod="openshift-service-ca/service-ca-74545575db-l96rs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.872310 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.887368 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.892467 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqt2z\" (UniqueName: \"kubernetes.io/projected/082f3bd2-f112-4f2e-b955-0826aac6df97-kube-api-access-xqt2z\") pod \"collect-profiles-29481660-hh6m6\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.902009 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-l96rs"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.913090 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nz27\" (UniqueName: \"kubernetes.io/projected/273a5bb6-cb84-41ee-a44a-ee5bc13291f5-kube-api-access-2nz27\") pod \"machine-config-server-mg52n\" (UID: \"273a5bb6-cb84-41ee-a44a-ee5bc13291f5\") " pod="openshift-machine-config-operator/machine-config-server-mg52n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.918841 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mg52n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.928659 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5kpq\" (UniqueName: \"kubernetes.io/projected/118decd3-a665-4997-bd40-0f68d2295238-kube-api-access-z5kpq\") pod \"machine-config-operator-67c9d58cbb-m6g4t\" (UID: \"118decd3-a665-4997-bd40-0f68d2295238\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.946742 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xn6qp"]
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.946947 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:09 crc kubenswrapper[5115]: E0120 09:10:09.947569 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.447548392 +0000 UTC m=+120.616326912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.955680 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssv77\" (UniqueName: \"kubernetes.io/projected/fbc48af4-261d-4599-a7fd-edd26b2b4022-kube-api-access-ssv77\") pod \"ingress-canary-ft42n\" (UID: \"fbc48af4-261d-4599-a7fd-edd26b2b4022\") " pod="openshift-ingress-canary/ingress-canary-ft42n"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.962607 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.962816 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-2vzsk"]
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.971931 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmrql\" (UniqueName: \"kubernetes.io/projected/d60eae6f-6fe4-41cd-8c8f-54749aacc87e-kube-api-access-wmrql\") pod \"dns-default-59xcc\" (UID: \"d60eae6f-6fe4-41cd-8c8f-54749aacc87e\") " pod="openshift-dns/dns-default-59xcc"
Jan 20 09:10:09 crc kubenswrapper[5115]: I0120 09:10:09.989707 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj2cf\" (UniqueName: \"kubernetes.io/projected/b39cc292-22ad-4fb0-9d3f-6467c81680eb-kube-api-access-cj2cf\") pod \"route-controller-manager-776cdc94d6-jxpqr\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.000767 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.006388 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.010809 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.026624 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nzmj\" (UniqueName: \"kubernetes.io/projected/31a102f9-d392-481f-85f7-4be9117cd31d-kube-api-access-4nzmj\") pod \"cluster-samples-operator-6b564684c8-lcng5\" (UID: \"31a102f9-d392-481f-85f7-4be9117cd31d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.033416 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.049954 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.050118 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.050164 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.050260 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.055886 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7kxc\" (UniqueName: \"kubernetes.io/projected/b967aa59-3ad8-4a80-a870-970c4166dd31-kube-api-access-v7kxc\") pod \"package-server-manager-77f986bd66-gc77j\" (UID: \"b967aa59-3ad8-4a80-a870-970c4166dd31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"
Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.059287 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.559245505 +0000 UTC m=+120.728024035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.059854 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.065652 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20
09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.078703 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpl59\" (UniqueName: \"kubernetes.io/projected/80f8b6d4-7eb4-42ec-9976-60dc6db3148f-kube-api-access-bpl59\") pod \"catalog-operator-75ff9f647d-mfd49\" (UID: \"80f8b6d4-7eb4-42ec-9976-60dc6db3148f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.085048 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.086974 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9cmd\" (UniqueName: \"kubernetes.io/projected/8c6ba355-2c21-431c-8767-821fb9075e1c-kube-api-access-r9cmd\") pod \"olm-operator-5cdf44d969-95nt8\" (UID: \"8c6ba355-2c21-431c-8767-821fb9075e1c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.168593 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.168996 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.170528 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.170685 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.170876 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: \"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.171447 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.671432432 +0000 UTC m=+120.840210962 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.186711 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfgv4\" (UniqueName: \"kubernetes.io/projected/ef29fedc-43ad-4cf5-b3ef-10a28c46842f-kube-api-access-gfgv4\") pod \"cluster-image-registry-operator-86c45576b9-h9rh5\" (UID: \"ef29fedc-43ad-4cf5-b3ef-10a28c46842f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.186946 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.197478 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.213171 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.213223 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.215840 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.225140 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.229820 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.238149 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.251612 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.274971 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.277565 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.287308 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.288448 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.289223 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.789195718 +0000 UTC m=+120.957974268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.311457 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.331914 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.332315 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 20 09:10:10 crc 
kubenswrapper[5115]: I0120 09:10:10.333465 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.335990 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.337143 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.338101 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.340166 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.351697 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c88bx"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.352137 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.359397 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" event={"ID":"4d93cff2-21b0-4fcb-b899-b6efe5a56822","Type":"ContainerStarted","Data":"857692043d4e2a0e52ae73c61d049790e037f8377cfd4c3084e2ea0725ae7c00"} Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.364412 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ecb1b469-4758-499e-a0ba-8204058552be-kube-api-access\") pod 
\"kube-apiserver-operator-575994946d-m6krp\" (UID: \"ecb1b469-4758-499e-a0ba-8204058552be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.370323 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" event={"ID":"72f63421-cfe9-45f8-85fe-b779a81a7ebb","Type":"ContainerStarted","Data":"28881063122d7a14f5feacf8a2ef22fe6f63494735a9de7c64a1cb7fda57c7c1"} Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.370605 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.372714 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" event={"ID":"c6f108d0-ed4b-4318-bd96-7de2824bf73e","Type":"ContainerStarted","Data":"518c872fec22cdd51a60c393a62a1da97b3362200d0830aef601a474fdfaf4fa"} Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.376535 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" event={"ID":"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001","Type":"ContainerStarted","Data":"030b057e9627fccd8c29ccbdbe6505fc414132ec82d49743a05995e6e529362c"} Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.387127 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" event={"ID":"45b3a05c-a4a6-4e67-9c8f-c914c93cb801","Type":"ContainerStarted","Data":"d77de929bd750a51c458f2d847183c40a060993b3059e0085b0e307e7f3cd220"} Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.387249 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d8f5093-1a2e-4c32-8c74-b6cfb185cc99-metrics-certs\") pod \"network-metrics-daemon-tzrjx\" (UID: 
\"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99\") " pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.389988 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.390147 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.391348 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:10.891332075 +0000 UTC m=+121.060110605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.392021 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mg52n" event={"ID":"273a5bb6-cb84-41ee-a44a-ee5bc13291f5","Type":"ContainerStarted","Data":"96e6ecc379e774b84bc4108889c42fdb721fe098da26e1b5d8de869c31ec8352"} Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.396462 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" event={"ID":"10472dc9-9bed-4d08-811a-76a55f0d6cf4","Type":"ContainerStarted","Data":"3582dc0a54cf6707b7c404a3d8a5a811a81b42edeb4908a47674f0f62dcb4252"} Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.430475 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.433640 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ft42n" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.450329 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.463937 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb7kn\" (UniqueName: \"kubernetes.io/projected/f6477423-4b0a-43d7-9514-bde25388af77-kube-api-access-hb7kn\") pod \"kube-storage-version-migrator-operator-565b79b866-2pl95\" (UID: \"f6477423-4b0a-43d7-9514-bde25388af77\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.470887 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.474752 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.490931 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.497152 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.499931 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.500208 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.000191412 +0000 UTC m=+121.168969932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.512343 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.532759 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.540917 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-znfxc\" (UniqueName: 
\"kubernetes.io/projected/a8dd6004-2cc4-4971-9dcb-18d8871286b8-kube-api-access-znfxc\") pod \"csi-hostpathplugin-ttcl5\" (UID: \"a8dd6004-2cc4-4971-9dcb-18d8871286b8\") " pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.543759 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmtmj\" (UniqueName: \"kubernetes.io/projected/0d738dd6-3c15-4131-837d-591792cb41cd-kube-api-access-kmtmj\") pod \"router-default-68cf44c8b8-n9hxc\" (UID: \"0d738dd6-3c15-4131-837d-591792cb41cd\") " pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.571564 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.576346 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tzrjx" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.591264 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.599767 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.612634 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.613086 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.113073067 +0000 UTC m=+121.281851597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.656953 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.660102 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.698031 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.713650 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.713951 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.21392601 +0000 UTC m=+121.382704540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.714140 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.715632 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-78z8z"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.724930 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.727543 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-glkw9"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.748689 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-s5mfg"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.750424 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.758598 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.815379 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.816105 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.816501 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.316487428 +0000 UTC m=+121.485265958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.857076 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.864757 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-ljj2s"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.870036 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-8622t"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.899634 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-l96rs"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.910222 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-ztcgs"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.918002 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.918191 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.418158363 +0000 UTC m=+121.586936893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.918619 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:10 crc kubenswrapper[5115]: E0120 09:10:10.919010 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.418996785 +0000 UTC m=+121.587775315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.937869 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.945420 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.987641 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.995696 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"] Jan 20 09:10:10 crc kubenswrapper[5115]: I0120 09:10:10.996487 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.021799 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.022259 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.522229262 +0000 UTC m=+121.691007792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.031417 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.105700 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-59xcc"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.126256 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.126651 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.62663638 +0000 UTC m=+121.795414910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: W0120 09:10:11.133213 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9ac66ad_91ae_4ffd_b159_a7549ca71803.slice/crio-1cf63a2b40982fc4b23ed671e18ed561146cde92c64687e450d593f1dc96d6ee WatchSource:0}: Error finding container 1cf63a2b40982fc4b23ed671e18ed561146cde92c64687e450d593f1dc96d6ee: Status 404 returned error can't find the container with id 1cf63a2b40982fc4b23ed671e18ed561146cde92c64687e450d593f1dc96d6ee Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.143164 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.151507 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.162479 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"] Jan 20 09:10:11 crc kubenswrapper[5115]: W0120 09:10:11.205226 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf41303d0_06e3_4554_8fa9_d9dd935d0bec.slice/crio-8fb0540e002139b2c25967b29a5873b95c522f39d23c3fcd90793835887d5721 WatchSource:0}: Error finding container 8fb0540e002139b2c25967b29a5873b95c522f39d23c3fcd90793835887d5721: 
Status 404 returned error can't find the container with id 8fb0540e002139b2c25967b29a5873b95c522f39d23c3fcd90793835887d5721 Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.227716 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.228265 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.728233812 +0000 UTC m=+121.897012342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.329625 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.329988 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.829974349 +0000 UTC m=+121.998752879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.415267 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" event={"ID":"80f8b6d4-7eb4-42ec-9976-60dc6db3148f","Type":"ContainerStarted","Data":"6114d050ba7344d59c20b4fa5ae32d642e9f03de9e9fd3b6ffa138c4bb1446bc"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.420055 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" event={"ID":"3b28944b-12d3-4087-b906-99fbf2937724","Type":"ContainerStarted","Data":"9867aaf1ba54f7e1ce8f653f72cd6cf2e28d74cb1e668f9b7eeaed47fded789e"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.430855 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.431091 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:11.931073078 +0000 UTC m=+122.099851608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.448228 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mg52n" event={"ID":"273a5bb6-cb84-41ee-a44a-ee5bc13291f5","Type":"ContainerStarted","Data":"4605b88333a42c6e823c3d40d543d9980763fa08927d988bd0e2e56767eedd6a"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.450081 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" event={"ID":"676675d9-dafb-4b30-ad88-bea33cf42ce0","Type":"ContainerStarted","Data":"d6d1ac4732cac18428ca5e1d1a0149baceff522aaa8a04805ddda01d65ae2590"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.456028 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" event={"ID":"8c6ba355-2c21-431c-8767-821fb9075e1c","Type":"ContainerStarted","Data":"c816df60d98c33bc0e07d0d9de360f95708feb6803ec0bb65b3ab842fdaff3a3"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.462286 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" 
event={"ID":"10472dc9-9bed-4d08-811a-76a55f0d6cf4","Type":"ContainerStarted","Data":"72f38a0ec4f70000765596eb43cfb1e0c64fd21da9d939639f480b7449581947"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.481813 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.500627 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"18f912f06c59235f5286c2791410fc92fae0eb44ec230d126606b127da4b7da1"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.512878 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.522921 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" event={"ID":"6008f0e6-56c0-4fdd-89b8-0649fb365b0f","Type":"ContainerStarted","Data":"7493bb218232c14833a0d0e5ff7d7bb0ca7ac7cf70738d52fdfad65e8f29b11b"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.532037 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.532756 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-20 09:10:12.032722093 +0000 UTC m=+122.201500623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.549397 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-glkw9" event={"ID":"26f7f00b-d69c-4a82-934c-025eb1500a33","Type":"ContainerStarted","Data":"8169ee48989da8ea1ff65ce4251b7d218c2b534157c42ad297050c8c1d400ace"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.553362 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.553992 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" event={"ID":"4d93cff2-21b0-4fcb-b899-b6efe5a56822","Type":"ContainerStarted","Data":"fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.555610 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.561693 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" event={"ID":"472e0bfa-47b1-4a6c-8fd5-3c5a0865c001","Type":"ContainerStarted","Data":"cb0da4370fa77a1149c8f2a607bf8df68c81e9f933d2b66a7582a5aa0c2c537e"} Jan 20 09:10:11 crc kubenswrapper[5115]: 
I0120 09:10:11.569049 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-59xcc" event={"ID":"d60eae6f-6fe4-41cd-8c8f-54749aacc87e","Type":"ContainerStarted","Data":"f3ae81c048828e0c39763c124b388b8386275dc126be13f23cc4ccd2cea78545"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.585287 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" event={"ID":"f7ec9898-6747-40af-be60-ce1289d0a4e6","Type":"ContainerStarted","Data":"b9c1c69cae88c3eda2c866da436570e149ec0926e969e41af36f800b4b17e8d2"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.618744 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" event={"ID":"ac548cbe-da92-4dd6-bd33-705689710018","Type":"ContainerStarted","Data":"b189332696850039ab1e02dbf24c0846f856d8e8e03a2617ef610a91dc248488"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.642194 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-78z8z" event={"ID":"9aa837bd-63fc-4bb8-b158-d8632117a117","Type":"ContainerStarted","Data":"614ee2002a75d6767f8e7c9e2e61360d9d5634b79bcdff3e785ae86a4ca4784f"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.648916 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.650423 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-20 09:10:12.150404396 +0000 UTC m=+122.319182926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.664306 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" event={"ID":"45b3a05c-a4a6-4e67-9c8f-c914c93cb801","Type":"ContainerStarted","Data":"6accd21ea1a6aea1f1180aaa76aba5788b55ce9fe6f0b7abce3037f0ddd5e615"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.724718 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-l96rs" event={"ID":"3b4463ed-eba2-4ba4-afb8-2424e957fc37","Type":"ContainerStarted","Data":"cd8c9fbae6d4c0be2c484010ceebdccef1db13489561034656c019cfeef3118d"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.735634 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.739989 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.742625 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.759505 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.760113 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.260088305 +0000 UTC m=+122.428866885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.761725 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" event={"ID":"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf","Type":"ContainerStarted","Data":"75164ea9a8f551d0afa06a4acb1db1e5d2a11d5cf9890414d91b7fac237bc02f"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.765795 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" event={"ID":"b967aa59-3ad8-4a80-a870-970c4166dd31","Type":"ContainerStarted","Data":"8d9d568901e811390357ab7a382f52584a24353ac4bdff85a472110157eb50ec"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.767955 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" event={"ID":"664dc1e9-b220-4dd9-8576-b5798850bc57","Type":"ContainerStarted","Data":"11a76b2995d1e7821d8b5caa00d0b12a5012c7b092dc0a7b36b27b7457c6f577"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.775213 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" event={"ID":"f41303d0-06e3-4554-8fa9-d9dd935d0bec","Type":"ContainerStarted","Data":"8fb0540e002139b2c25967b29a5873b95c522f39d23c3fcd90793835887d5721"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.776172 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.779584 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" event={"ID":"3984fc5a-413e-46e1-94ab-3c230891fe87","Type":"ContainerStarted","Data":"ba9e935cd9dbcccba3373b56114fb5112e6bd4ddbcf850c03f77ef25fb786214"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.817162 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" event={"ID":"dd3b472c-53e1-402a-ad30-244ea317f0e1","Type":"ContainerStarted","Data":"052fb4a983594ca74b3c2bc30d9134a6df6bc99ff8ec5a84f95c27e0f435b3c3"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.837431 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ljj2s" event={"ID":"b9ac66ad-91ae-4ffd-b159-a7549ca71803","Type":"ContainerStarted","Data":"1cf63a2b40982fc4b23ed671e18ed561146cde92c64687e450d593f1dc96d6ee"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.839254 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tzrjx"] Jan 20 
09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.856738 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ttcl5"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.859123 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" event={"ID":"603cfb78-063c-444d-8434-38e8ff6b5f70","Type":"ContainerStarted","Data":"ea10f8ee9b6eace2f54e544b5c883889c4598fce326bb396b8ef1d49b04cbd33"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.860689 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.860946 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.360919367 +0000 UTC m=+122.529697897 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.862478 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.863165 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.363150128 +0000 UTC m=+122.531928658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.869745 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" event={"ID":"0386fc07-a367-4188-8fab-3ce5d14ad6f2","Type":"ContainerStarted","Data":"cbd08ab0a2c4c0818dcbd527faa0be5b5f4a1bad92f6532575218bb39ed5a760"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.882586 5115 generic.go:358] "Generic (PLEG): container finished" podID="72f63421-cfe9-45f8-85fe-b779a81a7ebb" containerID="09e7fbda6c3e08fc45d4926c3ac4784e0e44c9fd8ef813f3b805e0113141078f" exitCode=0 Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.882665 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" event={"ID":"72f63421-cfe9-45f8-85fe-b779a81a7ebb","Type":"ContainerDied","Data":"09e7fbda6c3e08fc45d4926c3ac4784e0e44c9fd8ef813f3b805e0113141078f"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.887617 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" event={"ID":"c6f108d0-ed4b-4318-bd96-7de2824bf73e","Type":"ContainerStarted","Data":"c895c4a8b8266caaaf889d03a2ee164cf3d7cff1e696bc8858d256b77c671370"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.897302 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" 
event={"ID":"21e183fd-a881-4f61-a726-bcaaf60e71d5","Type":"ContainerStarted","Data":"a35b70628ae9545bde82275cb2462476256b0d2876d7d3b3a4fc47c22ba825ab"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.905803 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" event={"ID":"d702c0ea-d2bd-41dc-9a3a-39caacbb288d","Type":"ContainerStarted","Data":"92f7e8dc1afc55c246a3b6503fab8e7d7e7733acdb5d01763bcda6166ac74ec1"} Jan 20 09:10:11 crc kubenswrapper[5115]: W0120 09:10:11.907632 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecb1b469_4758_499e_a0ba_8204058552be.slice/crio-51cdc552eef228e01fec754efe08ca4499bd477430d44d8476a0d6a72e8158c5 WatchSource:0}: Error finding container 51cdc552eef228e01fec754efe08ca4499bd477430d44d8476a0d6a72e8158c5: Status 404 returned error can't find the container with id 51cdc552eef228e01fec754efe08ca4499bd477430d44d8476a0d6a72e8158c5 Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.912131 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" event={"ID":"73f78db9-bab5-49ee-84a4-9f0825efca8a","Type":"ContainerStarted","Data":"41ea8c623ecacb84e93a0bb70429c6d21f2263332366f0ca16d5017167557e81"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.917564 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5"] Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.933139 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" event={"ID":"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec","Type":"ContainerStarted","Data":"1f74e0c9554f8634c1b9f22b5a231966e157c8f60d4d46c7d458fa599c04679a"} Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.963481 5115 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.963731 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.463695902 +0000 UTC m=+122.632474432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:11 crc kubenswrapper[5115]: I0120 09:10:11.967195 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:11 crc kubenswrapper[5115]: E0120 09:10:11.969834 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.469814786 +0000 UTC m=+122.638593306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:12 crc kubenswrapper[5115]: W0120 09:10:11.997482 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d8f5093_1a2e_4c32_8c74_b6cfb185cc99.slice/crio-884539935bb1f8878042308d9999e84e1a2eef356f095222edb348b6b1199abf WatchSource:0}: Error finding container 884539935bb1f8878042308d9999e84e1a2eef356f095222edb348b6b1199abf: Status 404 returned error can't find the container with id 884539935bb1f8878042308d9999e84e1a2eef356f095222edb348b6b1199abf Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.027922 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ft42n"] Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.072446 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.072989 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.57295436 +0000 UTC m=+122.741732890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.175993 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.176277 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.676265388 +0000 UTC m=+122.845043918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.219059 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-mg52n" podStartSLOduration=7.219029774 podStartE2EDuration="7.219029774s" podCreationTimestamp="2026-01-20 09:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:12.170779232 +0000 UTC m=+122.339557762" watchObservedRunningTime="2026-01-20 09:10:12.219029774 +0000 UTC m=+122.387808304" Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.273065 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" podStartSLOduration=101.273038472 podStartE2EDuration="1m41.273038472s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:12.206316294 +0000 UTC m=+122.375094824" watchObservedRunningTime="2026-01-20 09:10:12.273038472 +0000 UTC m=+122.441816992" Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.295887 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.296398 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.796372767 +0000 UTC m=+122.965151297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.341156 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bxhkt" podStartSLOduration=101.341135916 podStartE2EDuration="1m41.341135916s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:12.27334496 +0000 UTC m=+122.442123490" watchObservedRunningTime="2026-01-20 09:10:12.341135916 +0000 UTC m=+122.509914446" Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.405102 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" podStartSLOduration=101.405085121 podStartE2EDuration="1m41.405085121s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-20 09:10:12.401842504 +0000 UTC m=+122.570621034" watchObservedRunningTime="2026-01-20 09:10:12.405085121 +0000 UTC m=+122.573863651" Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.408951 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.409290 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:12.909276373 +0000 UTC m=+123.078054903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.432703 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" podStartSLOduration=6.43267945 podStartE2EDuration="6.43267945s" podCreationTimestamp="2026-01-20 09:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:12.432500145 +0000 UTC m=+122.601278675" watchObservedRunningTime="2026-01-20 09:10:12.43267945 +0000 UTC 
m=+122.601457980" Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.510347 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.510837 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.010808253 +0000 UTC m=+123.179586783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.615087 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.615591 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-20 09:10:13.115571681 +0000 UTC m=+123.284350211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.716887 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.717048 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.21701558 +0000 UTC m=+123.385794110 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.717503 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.718163 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.21815152 +0000 UTC m=+123.386930050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.821440 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.821791 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.321771567 +0000 UTC m=+123.490550097 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.923201 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:12 crc kubenswrapper[5115]: E0120 09:10:12.923681 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.423658947 +0000 UTC m=+123.592437477 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.946210 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-5494g" event={"ID":"09bd4dd7-1e94-43db-b3bc-fe2dd530d8ec","Type":"ContainerStarted","Data":"c6fd3bff44fe50a0b58401d9b3c0bf164f6c001d24ee2c0d62551ade272e9815"} Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.964514 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" event={"ID":"ecb1b469-4758-499e-a0ba-8204058552be","Type":"ContainerStarted","Data":"51cdc552eef228e01fec754efe08ca4499bd477430d44d8476a0d6a72e8158c5"} Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.968940 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" event={"ID":"082f3bd2-f112-4f2e-b955-0826aac6df97","Type":"ContainerStarted","Data":"08844f14a2be2524b67d25e6d9e317be36bfd5bc9b4b4cda240955fd50dbb961"} Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.973685 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" event={"ID":"ef29fedc-43ad-4cf5-b3ef-10a28c46842f","Type":"ContainerStarted","Data":"5f14158e429c6f169c167efc97ae7ee8cb13e746c4dee1db68d688c231a5e7e8"} Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.985653 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" event={"ID":"664dc1e9-b220-4dd9-8576-b5798850bc57","Type":"ContainerStarted","Data":"883ad34e44bc13a65fb331c725c96d57ffd7da473ec9ed16860ba076f2702bf1"} Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.987563 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.994504 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ljj2s" event={"ID":"b9ac66ad-91ae-4ffd-b159-a7549ca71803","Type":"ContainerStarted","Data":"ceaf8b77d0526829ab984bb0b3daa675f7bb0100da4f269637f721e655cd2360"} Jan 20 09:10:12 crc kubenswrapper[5115]: I0120 09:10:12.994845 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-ljj2s" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.002037 5115 generic.go:358] "Generic (PLEG): container finished" podID="0386fc07-a367-4188-8fab-3ce5d14ad6f2" containerID="becbfe546f7a2e1bb8cfdb84a57c1179541310157b00eb6f1280ed8ef84bf6c9" exitCode=0 Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.002224 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" event={"ID":"0386fc07-a367-4188-8fab-3ce5d14ad6f2","Type":"ContainerDied","Data":"becbfe546f7a2e1bb8cfdb84a57c1179541310157b00eb6f1280ed8ef84bf6c9"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.016838 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" event={"ID":"f6477423-4b0a-43d7-9514-bde25388af77","Type":"ContainerStarted","Data":"aa1f902fe6f5d74d02915fedddde26e938ccda6a6fc790c74302819840debc56"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.025158 5115 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.025805 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.525757203 +0000 UTC m=+123.694535733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.028812 5115 patch_prober.go:28] interesting pod/downloads-747b44746d-ljj2s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.029678 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ljj2s" podUID="b9ac66ad-91ae-4ffd-b159-a7549ca71803" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.033094 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" event={"ID":"a8dd6004-2cc4-4971-9dcb-18d8871286b8","Type":"ContainerStarted","Data":"3600dba173d1c61d6f6ab695b5a5c43e3072abb0d351f95623aa429868705043"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.050952 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" podStartSLOduration=102.050935308 podStartE2EDuration="1m42.050935308s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.00806513 +0000 UTC m=+123.176843670" watchObservedRunningTime="2026-01-20 09:10:13.050935308 +0000 UTC m=+123.219713838" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.070845 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-ljj2s" podStartSLOduration=102.070826101 podStartE2EDuration="1m42.070826101s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.068731115 +0000 UTC m=+123.237509635" watchObservedRunningTime="2026-01-20 09:10:13.070826101 +0000 UTC m=+123.239604631" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.089312 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" event={"ID":"0d738dd6-3c15-4131-837d-591792cb41cd","Type":"ContainerStarted","Data":"fba3baf48c6183de048f0ec7d86881b0b0b8a0f79ebc580960b93f498caf9bee"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.113549 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" 
event={"ID":"73f78db9-bab5-49ee-84a4-9f0825efca8a","Type":"ContainerStarted","Data":"cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.115169 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.129602 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.129947 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.629917595 +0000 UTC m=+123.798696125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.139363 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"4216ee5471b2d0b2c75950445b5235e7a9fbc11060878d69eea5c4d59ae91980"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.155109 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk" event={"ID":"01855721-bd0b-4ddc-91d0-be658345b9c5","Type":"ContainerStarted","Data":"c6c3b97da8685ad26a30368e912e4bd3bef88b40806986a26910beaaa8f0a9fb"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.158100 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" podStartSLOduration=102.15806995 podStartE2EDuration="1m42.15806995s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.155548142 +0000 UTC m=+123.324326672" watchObservedRunningTime="2026-01-20 09:10:13.15806995 +0000 UTC m=+123.326848480" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.193137 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" 
event={"ID":"b39cc292-22ad-4fb0-9d3f-6467c81680eb","Type":"ContainerStarted","Data":"5fb596da1738dbe8416b2b3a595dc262a4288da61aa3303a2ea6eb0db0479d63"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.210406 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" event={"ID":"8c6ba355-2c21-431c-8767-821fb9075e1c","Type":"ContainerStarted","Data":"bfb694ead5c0258216ff138837d8130845e4622fc01c854a8d52dd93bbdfcdbc"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.211504 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.231092 5115 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-95nt8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.231207 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" podUID="8c6ba355-2c21-431c-8767-821fb9075e1c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.232199 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.233857 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.73383362 +0000 UTC m=+123.902612150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.248825 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"d8310b84dc2c03d782dfa8f7355270550f4eccaa51192ceb47d2554a222451c1"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.263646 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" podStartSLOduration=102.263617848 podStartE2EDuration="1m42.263617848s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.261254044 +0000 UTC m=+123.430032574" watchObservedRunningTime="2026-01-20 09:10:13.263617848 +0000 UTC m=+123.432396398" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.283518 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-glkw9" 
event={"ID":"26f7f00b-d69c-4a82-934c-025eb1500a33","Type":"ContainerStarted","Data":"274a43812679da83ec8291c1b5343bdabb2bf7b42438e001c846d085d841b5cd"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.285070 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-glkw9" Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.308290 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ft42n" event={"ID":"fbc48af4-261d-4599-a7fd-edd26b2b4022","Type":"ContainerStarted","Data":"98f7d0441adb463cff7325f8b7fc2b1e1ae932d02f57de866d8c426324363283"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.316423 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-l96rs" event={"ID":"3b4463ed-eba2-4ba4-afb8-2424e957fc37","Type":"ContainerStarted","Data":"f6c068df1f75021aa18603756618ed617463b2c511d0a4369a1370bafb29a458"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.335504 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" event={"ID":"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99","Type":"ContainerStarted","Data":"884539935bb1f8878042308d9999e84e1a2eef356f095222edb348b6b1199abf"} Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.336758 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.339493 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.83947316 +0000 UTC m=+124.008251690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.349825 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" event={"ID":"118decd3-a665-4997-bd40-0f68d2295238","Type":"ContainerStarted","Data":"679bdfff9044d5b0da2632379142bdbb12d8f1e8613651726a7bfe0ea19fea0e"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.351993 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" event={"ID":"b967aa59-3ad8-4a80-a870-970c4166dd31","Type":"ContainerStarted","Data":"7e302e1c59b3ab2f846eedc21557860d9acba1a085af62ab18debb1b64309de0"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.367333 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" event={"ID":"3984fc5a-413e-46e1-94ab-3c230891fe87","Type":"ContainerStarted","Data":"875b2918867b6e3f78a8dae2adc4f181e4875284a8cd56fc5c6d213e75261ea2"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.368735 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.373122 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-l96rs" podStartSLOduration=102.373098251 podStartE2EDuration="1m42.373098251s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.37303199 +0000 UTC m=+123.541810520" watchObservedRunningTime="2026-01-20 09:10:13.373098251 +0000 UTC m=+123.541876781"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.379571 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-glkw9" podStartSLOduration=102.379564265 podStartE2EDuration="1m42.379564265s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.339329327 +0000 UTC m=+123.508107857" watchObservedRunningTime="2026-01-20 09:10:13.379564265 +0000 UTC m=+123.548342795"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.391473 5115 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-9gfdh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body=
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.391556 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.392039 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" event={"ID":"dd3b472c-53e1-402a-ad30-244ea317f0e1","Type":"ContainerStarted","Data":"66f82900b831f33022203ecf089c4daa28d84b6dd6f7ef70e57a1d524225d69d"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.412012 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-pss2p" event={"ID":"603cfb78-063c-444d-8434-38e8ff6b5f70","Type":"ContainerStarted","Data":"017b835d494bf2f06496ea1392bd823f965eb975bf926493be6531367ca0aee4"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.423473 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"2e85b747ec70b384b615e5bce3ac0531dcd9c919954dd52eee1a50c51619135f"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.433981 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" podStartSLOduration=102.42603817 podStartE2EDuration="1m42.42603817s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.408019927 +0000 UTC m=+123.576798457" watchObservedRunningTime="2026-01-20 09:10:13.42603817 +0000 UTC m=+123.594816700"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.435439 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-s85qm" podStartSLOduration=102.435421082 podStartE2EDuration="1m42.435421082s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.42415108 +0000 UTC m=+123.592929610" watchObservedRunningTime="2026-01-20 09:10:13.435421082 +0000 UTC m=+123.604199612"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.438445 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.439267 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:13.939251674 +0000 UTC m=+124.108030204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.445199 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" event={"ID":"31a102f9-d392-481f-85f7-4be9117cd31d","Type":"ContainerStarted","Data":"4e36671eb92c313415eff2616557fe69414813757951555ee8cd7b78adb01ea2"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.523934 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" event={"ID":"c6f108d0-ed4b-4318-bd96-7de2824bf73e","Type":"ContainerStarted","Data":"cf34808341ae10d73a36b6ee114824a2e212ee1211ead8c79b96024001089d11"}
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.541964 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.543397 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.043376065 +0000 UTC m=+124.212154595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.645521 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.647190 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.147144546 +0000 UTC m=+124.315923256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.661681 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.697193 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-2vzsk" podStartSLOduration=102.697165696 podStartE2EDuration="1m42.697165696s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:13.555261713 +0000 UTC m=+123.724040243" watchObservedRunningTime="2026-01-20 09:10:13.697165696 +0000 UTC m=+123.865944226"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.700671 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.716522 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-glkw9"
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.746862 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.747266 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.247252738 +0000 UTC m=+124.416031268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.849619 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.850179 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.350155046 +0000 UTC m=+124.518933576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.954754 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:13 crc kubenswrapper[5115]: E0120 09:10:13.955475 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.455448158 +0000 UTC m=+124.624226688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:13 crc kubenswrapper[5115]: I0120 09:10:13.956516 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.068563 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.068915 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.568884077 +0000 UTC m=+124.737662607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.170961 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.171476 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.671454126 +0000 UTC m=+124.840232656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.271976 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.272159 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.772129274 +0000 UTC m=+124.940907804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.274879 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.275487 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.775468824 +0000 UTC m=+124.944247354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.364504 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55222: no serving certificate available for the kubelet"
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.376708 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.377144 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.877120948 +0000 UTC m=+125.045899478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.423116 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55236: no serving certificate available for the kubelet"
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.471108 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55240: no serving certificate available for the kubelet"
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.478241 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.478754 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:14.978731061 +0000 UTC m=+125.147509591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.531069 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" event={"ID":"f7ec9898-6747-40af-be60-ce1289d0a4e6","Type":"ContainerStarted","Data":"b11fd6a2306cd411f0028b604e66a34704528f4af33e91a10d60a2bc82ede027"}
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.549501 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55248: no serving certificate available for the kubelet"
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.570776 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" event={"ID":"ac548cbe-da92-4dd6-bd33-705689710018","Type":"ContainerStarted","Data":"32c3c0ddd37a60d9857ab678812ca272a27fc659113c02ea581fe79c776141f2"}
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.572112 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-69gcn" podStartSLOduration=103.572075723 podStartE2EDuration="1m43.572075723s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:14.57124483 +0000 UTC m=+124.740023360" watchObservedRunningTime="2026-01-20 09:10:14.572075723 +0000 UTC m=+124.740854253"
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.582974 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.583592 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.08356352 +0000 UTC m=+125.252342050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.704707 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.706216 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.206195436 +0000 UTC m=+125.374973966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.706274 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55258: no serving certificate available for the kubelet"
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.707841 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" event={"ID":"45b3a05c-a4a6-4e67-9c8f-c914c93cb801","Type":"ContainerStarted","Data":"1f08c0c08d46d56f5abfe3753b6bcdc1fa6d98aa4e44d81f7028c4bb52620059"}
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.734443 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" event={"ID":"118decd3-a665-4997-bd40-0f68d2295238","Type":"ContainerStarted","Data":"30197d2a8eba478c1cc1760f61a1263e6e709d83f8f8ebb93f86731179299136"}
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.736754 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pkz7s"]
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.819661 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.821060 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.321027704 +0000 UTC m=+125.489806234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.921047 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:14 crc kubenswrapper[5115]: E0120 09:10:14.921425 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.421409784 +0000 UTC m=+125.590188314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.949988 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" event={"ID":"f41303d0-06e3-4554-8fa9-d9dd935d0bec","Type":"ContainerStarted","Data":"fdd3efc8c732127419bdb406d5c956bac0291772cb27fcf0bbd4840987a64dea"}
Jan 20 09:10:14 crc kubenswrapper[5115]: I0120 09:10:14.972606 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55260: no serving certificate available for the kubelet"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.008333 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-6lm7w" podStartSLOduration=104.008304753 podStartE2EDuration="1m44.008304753s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:14.77125345 +0000 UTC m=+124.940031980" watchObservedRunningTime="2026-01-20 09:10:15.008304753 +0000 UTC m=+125.177083283"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.010538 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-9hn8c" podStartSLOduration=104.010525822 podStartE2EDuration="1m44.010525822s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.00971423 +0000 UTC m=+125.178492760" watchObservedRunningTime="2026-01-20 09:10:15.010525822 +0000 UTC m=+125.179304352"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.026832 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.029192 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.529165692 +0000 UTC m=+125.697944222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.069330 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"85dcaade86f063dfd07dd8dd3838242dadcb7141d2e72c4d65bbea6d3df32cc6"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.069485 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.108361 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" event={"ID":"31a102f9-d392-481f-85f7-4be9117cd31d","Type":"ContainerStarted","Data":"ec20619374fc34db263286efbddbbf170e4ab13a8140da93ce8880910ca82771"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.123725 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" event={"ID":"21e183fd-a881-4f61-a726-bcaaf60e71d5","Type":"ContainerStarted","Data":"5b58a7173f5625d704260f3fd29fb7f952ca76d2e1fc3bf8c886b66d46366673"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.129083 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.135788 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.635747738 +0000 UTC m=+125.804526268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.146358 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" event={"ID":"d702c0ea-d2bd-41dc-9a3a-39caacbb288d","Type":"ContainerStarted","Data":"b966c232a6bba908a3cb408998b20eed2f0f64eb633e9680aa989c6a554d0a4c"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.147288 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.177288 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" event={"ID":"676675d9-dafb-4b30-ad88-bea33cf42ce0","Type":"ContainerStarted","Data":"94af2353f43c3f516b2f7b438b2db2e94e583cd7806c99cb9e1149867eab6b39"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.193971 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" event={"ID":"082f3bd2-f112-4f2e-b955-0826aac6df97","Type":"ContainerStarted","Data":"91b474462e17975b2a2291c38c1eb2339450031fdea7fbcff486b36751736b0a"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.195131 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-rjtnv" podStartSLOduration=104.195115769 podStartE2EDuration="1m44.195115769s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.16755673 +0000 UTC m=+125.336335260" watchObservedRunningTime="2026-01-20 09:10:15.195115769 +0000 UTC m=+125.363894299"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.202375 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" podStartSLOduration=104.202348633 podStartE2EDuration="1m44.202348633s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.197219625 +0000 UTC m=+125.365998155" watchObservedRunningTime="2026-01-20 09:10:15.202348633 +0000 UTC m=+125.371127163"
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.226751 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" event={"ID":"10472dc9-9bed-4d08-811a-76a55f0d6cf4","Type":"ContainerStarted","Data":"fbeb17a228eed5edc217c90401e742e5b0c7e29b5cc6b24113e772348f8e37d9"}
Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.230556 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.231814 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed.
No retries permitted until 2026-01-20 09:10:15.731783352 +0000 UTC m=+125.900562062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.243250 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55270: no serving certificate available for the kubelet" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.243816 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" event={"ID":"6008f0e6-56c0-4fdd-89b8-0649fb365b0f","Type":"ContainerStarted","Data":"db58d0f502123e7bc044ec581bb2c8cb19c4c3d370def9804c1d6afe2300fc8e"} Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.256546 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" event={"ID":"ef29fedc-43ad-4cf5-b3ef-10a28c46842f","Type":"ContainerStarted","Data":"92f21792d1cd5d81e606078b9ae4b9cf5f3e41142ad1cfaa99ff73710e2b0061"} Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.284310 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-78z8z" event={"ID":"9aa837bd-63fc-4bb8-b158-d8632117a117","Type":"ContainerStarted","Data":"9d839f302bc858643c72edff27530af9683871acfdb2cc7ee62888ae0dec2fcf"} Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.331999 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.333434 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.833410365 +0000 UTC m=+126.002188895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.339337 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" event={"ID":"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf","Type":"ContainerStarted","Data":"fd7a42fa72c3427ad9620ef2052c0caea8c21b1957ce99460e6432583c26bcfa"} Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.381423 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-777zr" podStartSLOduration=104.381404172 podStartE2EDuration="1m44.381404172s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.291635735 +0000 UTC m=+125.460414265" watchObservedRunningTime="2026-01-20 09:10:15.381404172 
+0000 UTC m=+125.550182702" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.384072 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" event={"ID":"f6477423-4b0a-43d7-9514-bde25388af77","Type":"ContainerStarted","Data":"2f6ba41ee6db11c7a43d43c2a79a711e54bad73e5e177b4c795f496c28b34516"} Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.416167 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" event={"ID":"0d738dd6-3c15-4131-837d-591792cb41cd","Type":"ContainerStarted","Data":"61ff3e55fc40df5d3c04cbeadd387ff02ac73b5771b6bc7863af5b8efb3e98f4"} Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.435176 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.435547 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.935506101 +0000 UTC m=+126.104284631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.435859 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.438258 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:15.938240784 +0000 UTC m=+126.107019314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.449817 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"a3c8492358b1e17a5b01ad3bdd46cc8aced54f44c93d0f320092b1db7b32253d"} Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.466356 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" podStartSLOduration=104.466333727 podStartE2EDuration="1m44.466333727s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.400423081 +0000 UTC m=+125.569201611" watchObservedRunningTime="2026-01-20 09:10:15.466333727 +0000 UTC m=+125.635112257" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.472529 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7ntwm" podStartSLOduration=104.472508693 podStartE2EDuration="1m44.472508693s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.470416626 +0000 UTC m=+125.639195156" watchObservedRunningTime="2026-01-20 09:10:15.472508693 +0000 UTC 
m=+125.641287223" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.532259 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-78z8z" podStartSLOduration=104.532237563 podStartE2EDuration="1m44.532237563s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.530006124 +0000 UTC m=+125.698784654" watchObservedRunningTime="2026-01-20 09:10:15.532237563 +0000 UTC m=+125.701016093" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.539697 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.540046 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.040019902 +0000 UTC m=+126.208798432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.563925 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk" event={"ID":"01855721-bd0b-4ddc-91d0-be658345b9c5","Type":"ContainerStarted","Data":"e7d503df6c400b952a9f18f7d520f8669cddaf0336429554e035288fbb861dad"} Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.567147 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" podStartSLOduration=104.567134228 podStartE2EDuration="1m44.567134228s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.566333277 +0000 UTC m=+125.735111807" watchObservedRunningTime="2026-01-20 09:10:15.567134228 +0000 UTC m=+125.735912758" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.594506 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" event={"ID":"80f8b6d4-7eb4-42ec-9976-60dc6db3148f","Type":"ContainerStarted","Data":"a759b846b0f2fd42a045e7a86fb6f4efd76c300ac821a2741955c8437c88cf9e"} Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.595820 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 
09:10:15.642648 5115 generic.go:358] "Generic (PLEG): container finished" podID="3b28944b-12d3-4087-b906-99fbf2937724" containerID="734e1601652462f7bd82995e493ba0a72c74f78c5482c86ba0be7444bba17e45" exitCode=0 Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.642814 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" event={"ID":"3b28944b-12d3-4087-b906-99fbf2937724","Type":"ContainerDied","Data":"734e1601652462f7bd82995e493ba0a72c74f78c5482c86ba0be7444bba17e45"} Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.647626 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.648236 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.649196 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.149181657 +0000 UTC m=+126.317960187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.679928 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-2pl95" podStartSLOduration=104.67988527 podStartE2EDuration="1m44.67988527s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.649881286 +0000 UTC m=+125.818659816" watchObservedRunningTime="2026-01-20 09:10:15.67988527 +0000 UTC m=+125.848663800" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.680331 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" event={"ID":"b39cc292-22ad-4fb0-9d3f-6467c81680eb","Type":"ContainerStarted","Data":"b0488d20e94845aedd9b1bbe8d5471305129edf3c1b7b5a598c3cede13658a01"} Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.681311 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.700147 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55274: no serving certificate available for the kubelet" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.701139 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-h9rh5" podStartSLOduration=104.701108749 podStartE2EDuration="1m44.701108749s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.689324863 +0000 UTC m=+125.858103393" watchObservedRunningTime="2026-01-20 09:10:15.701108749 +0000 UTC m=+125.869887279" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.725845 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-59xcc" event={"ID":"d60eae6f-6fe4-41cd-8c8f-54749aacc87e","Type":"ContainerStarted","Data":"593185d573871990da6dc3a956cd8bd9ff1270503cdef92e2909a86f8647f48f"} Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.733763 5115 patch_prober.go:28] interesting pod/downloads-747b44746d-ljj2s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.733849 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ljj2s" podUID="b9ac66ad-91ae-4ffd-b159-a7549ca71803" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.734992 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk" podStartSLOduration=104.734887534 podStartE2EDuration="1m44.734887534s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.734216936 +0000 UTC m=+125.902995466" 
watchObservedRunningTime="2026-01-20 09:10:15.734887534 +0000 UTC m=+125.903666064" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.735755 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.757782 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:15 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:15 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:15 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.757832 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.758150 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.758414 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.758640 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-20 09:10:16.25862046 +0000 UTC m=+126.427398990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.758819 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.759218 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.259206685 +0000 UTC m=+126.427985215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.793497 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-95nt8" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.858670 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" podStartSLOduration=104.85865203 podStartE2EDuration="1m44.85865203s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.855625579 +0000 UTC m=+126.024404099" watchObservedRunningTime="2026-01-20 09:10:15.85865203 +0000 UTC m=+126.027430550" Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.862726 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.864500 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-20 09:10:16.364477557 +0000 UTC m=+126.533256077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.965887 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:15 crc kubenswrapper[5115]: E0120 09:10:15.966421 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.466403448 +0000 UTC m=+126.635181978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:15 crc kubenswrapper[5115]: I0120 09:10:15.991880 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-mfd49" podStartSLOduration=104.991853481 podStartE2EDuration="1m44.991853481s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:15.927169386 +0000 UTC m=+126.095947916" watchObservedRunningTime="2026-01-20 09:10:15.991853481 +0000 UTC m=+126.160632011" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.034492 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podStartSLOduration=105.034471732 podStartE2EDuration="1m45.034471732s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:16.033476705 +0000 UTC m=+126.202255235" watchObservedRunningTime="2026-01-20 09:10:16.034471732 +0000 UTC m=+126.203250262" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.070009 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.070253 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.570232391 +0000 UTC m=+126.739010921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.152005 5115 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-smr5d container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.152095 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" podUID="d702c0ea-d2bd-41dc-9a3a-39caacbb288d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.171711 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.172110 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.67209674 +0000 UTC m=+126.840875270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.247045 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.276684 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.776656963 +0000 UTC m=+126.945435493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.276552 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.277726 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.278093 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.7780733 +0000 UTC m=+126.946851830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.320328 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2dlnj"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.328845 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.334799 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.338450 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dlnj"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.387541 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.387841 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwzdv\" (UniqueName: \"kubernetes.io/projected/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-kube-api-access-xwzdv\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " 
pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.387917 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-utilities\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.387957 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-catalog-content\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.388079 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.888057958 +0000 UTC m=+127.056836488 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.427389 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55278: no serving certificate available for the kubelet" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.489037 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-utilities\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.489082 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-catalog-content\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.489235 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.489278 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-xwzdv\" (UniqueName: \"kubernetes.io/projected/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-kube-api-access-xwzdv\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.489741 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-utilities\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.490068 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-catalog-content\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.490132 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:16.990115522 +0000 UTC m=+127.158894052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.532241 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mrnvw"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.538778 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwzdv\" (UniqueName: \"kubernetes.io/projected/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-kube-api-access-xwzdv\") pod \"community-operators-2dlnj\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.545885 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrnvw"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.546115 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.566920 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.592833 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.593031 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.09299945 +0000 UTC m=+127.261777980 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.593198 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-utilities\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.593307 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25426\" (UniqueName: \"kubernetes.io/projected/e388c4ad-0d02-4736-b503-a96f7478edb4-kube-api-access-25426\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.593495 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.593578 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-catalog-content\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.594002 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.093988277 +0000 UTC m=+127.262766807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.667399 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.694565 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.694881 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-catalog-content\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.695077 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.195039635 +0000 UTC m=+127.363818165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.695487 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-catalog-content\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.695578 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-utilities\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.695694 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-25426\" (UniqueName: \"kubernetes.io/projected/e388c4ad-0d02-4736-b503-a96f7478edb4-kube-api-access-25426\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.696614 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-utilities\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " 
pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.706383 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cn6h9"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.714081 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.734693 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-25426\" (UniqueName: \"kubernetes.io/projected/e388c4ad-0d02-4736-b503-a96f7478edb4-kube-api-access-25426\") pod \"certified-operators-mrnvw\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.735956 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn6h9"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.753251 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:16 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:16 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:16 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.753326 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.806553 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-59xcc" 
event={"ID":"d60eae6f-6fe4-41cd-8c8f-54749aacc87e","Type":"ContainerStarted","Data":"e0f1eefe6ad27b2c5be50e40392f96025b89fb3e134d9e85311a28f373496130"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.806788 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.807758 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-utilities\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.807830 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.807888 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-catalog-content\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.807955 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp6vx\" (UniqueName: \"kubernetes.io/projected/c182ef91-1ca8-4330-bd75-8120c4401b54-kube-api-access-fp6vx\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " 
pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.809469 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.309446391 +0000 UTC m=+127.478224911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.823503 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" event={"ID":"ac548cbe-da92-4dd6-bd33-705689710018","Type":"ContainerStarted","Data":"3e31bf11180906f7b330777064934706cfb0c8c4a18f718f32ff9e3a8b0b8448"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.832392 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ft42n" event={"ID":"fbc48af4-261d-4599-a7fd-edd26b2b4022","Type":"ContainerStarted","Data":"5bf3f0836e17df2b3ed3402a2d5fbfb042d3679bf98612c415ec5630cc23305e"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.838496 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-59xcc" podStartSLOduration=10.838471118 podStartE2EDuration="10.838471118s" podCreationTimestamp="2026-01-20 09:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:16.83666146 +0000 UTC m=+127.005439990" 
watchObservedRunningTime="2026-01-20 09:10:16.838471118 +0000 UTC m=+127.007249648" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.848639 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" event={"ID":"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99","Type":"ContainerStarted","Data":"d695445bedef5f16dfd39f8315a548a1726ead3a0d76056cf7bc7035efb0c47a"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.848719 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tzrjx" event={"ID":"3d8f5093-1a2e-4c32-8c74-b6cfb185cc99","Type":"ContainerStarted","Data":"03e85f1250a159644682d8c2988a07c749e0197930f9f9a9280d8cc1cb25fe8c"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.857268 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.860263 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" event={"ID":"118decd3-a665-4997-bd40-0f68d2295238","Type":"ContainerStarted","Data":"208f62729a2edf66180ea82cd91b6e6bc5090360ae7cd4eef33cf055d1f09245"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.883270 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-8622t" podStartSLOduration=105.883247488 podStartE2EDuration="1m45.883247488s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:16.882596321 +0000 UTC m=+127.051374851" watchObservedRunningTime="2026-01-20 09:10:16.883247488 +0000 UTC m=+127.052026018" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.888254 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" event={"ID":"b967aa59-3ad8-4a80-a870-970c4166dd31","Type":"ContainerStarted","Data":"f4b2c38d2426bfd844921a4b04717cc1b1b784afe9d058de47d652b81bd68872"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.889393 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.910493 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.912352 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.912777 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-catalog-content\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.912827 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fp6vx\" (UniqueName: \"kubernetes.io/projected/c182ef91-1ca8-4330-bd75-8120c4401b54-kube-api-access-fp6vx\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.912909 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-utilities\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: E0120 09:10:16.913180 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.413122439 +0000 UTC m=+127.581900979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.913319 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-utilities\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.915528 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-catalog-content\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.935334 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" event={"ID":"31a102f9-d392-481f-85f7-4be9117cd31d","Type":"ContainerStarted","Data":"c719c8a450ed77e0000f58d23c7588dd7e5f8bb38a0115a9a2984f9aa9f5bbab"} Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.947403 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ln8lc"] Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.965888 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:16 crc kubenswrapper[5115]: I0120 09:10:16.983453 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-ft42n" podStartSLOduration=11.983429293 podStartE2EDuration="11.983429293s" podCreationTimestamp="2026-01-20 09:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:16.983119545 +0000 UTC m=+127.151898075" watchObservedRunningTime="2026-01-20 09:10:16.983429293 +0000 UTC m=+127.152207823" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.012824 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" event={"ID":"ecb1b469-4758-499e-a0ba-8204058552be","Type":"ContainerStarted","Data":"623ef71af5aaa936c2b34250ed6bfeabb18db8f3cd11fb770c90a6c98f43618f"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.013974 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:17 crc 
kubenswrapper[5115]: E0120 09:10:17.017363 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.517339922 +0000 UTC m=+127.686118442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.039002 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" event={"ID":"6008f0e6-56c0-4fdd-89b8-0649fb365b0f","Type":"ContainerStarted","Data":"f9734becf9da70049daba053ac14471c6d41b24eee9735cc9ae0bb10bf63500f"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.043543 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-5rdz6" event={"ID":"d2be688d-0ad1-4b7a-9c55-0f2b7500cfdf","Type":"ContainerStarted","Data":"580ba2b077fd50f319404f9b893158cc5f4bbdbcee8233b368fbf311b1e7dd7d"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.055601 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ln8lc"] Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.092152 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp6vx\" (UniqueName: \"kubernetes.io/projected/c182ef91-1ca8-4330-bd75-8120c4401b54-kube-api-access-fp6vx\") pod \"community-operators-cn6h9\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " 
pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.114447 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-tzrjx" podStartSLOduration=106.114417683 podStartE2EDuration="1m46.114417683s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.088610982 +0000 UTC m=+127.257389512" watchObservedRunningTime="2026-01-20 09:10:17.114417683 +0000 UTC m=+127.283196213" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.122111 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.123233 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" event={"ID":"0386fc07-a367-4188-8fab-3ce5d14ad6f2","Type":"ContainerStarted","Data":"5e106c2a534c2832eb7b6fe6cc406cf531006613c40446e80e9b15a58be900c0"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.152197 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.152563 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-catalog-content\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc 
kubenswrapper[5115]: I0120 09:10:17.152638 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-utilities\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.152658 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft22z\" (UniqueName: \"kubernetes.io/projected/098c57a3-a775-41d0-b528-6833df51eb70-kube-api-access-ft22z\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.152779 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.652756471 +0000 UTC m=+127.821535001 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.164093 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j" podStartSLOduration=106.164070174 podStartE2EDuration="1m46.164070174s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.160824396 +0000 UTC m=+127.329602926" watchObservedRunningTime="2026-01-20 09:10:17.164070174 +0000 UTC m=+127.332848704" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.170004 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" event={"ID":"72f63421-cfe9-45f8-85fe-b779a81a7ebb","Type":"ContainerStarted","Data":"e135243144f39f667f48060809952423e9baf250db9ce7fbeac18b53368c199e"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.170358 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" event={"ID":"72f63421-cfe9-45f8-85fe-b779a81a7ebb","Type":"ContainerStarted","Data":"aebedec17e42fd5419092403fcaf894225a0a1e0062fb7d78784967ec979f31d"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.178413 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xtwqk" 
event={"ID":"01855721-bd0b-4ddc-91d0-be658345b9c5","Type":"ContainerStarted","Data":"9f3bccd5b0f20ddbd7177017144088df3498ba8358f0134c9b7a7de81336524c"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.197253 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lcng5" podStartSLOduration=106.197225702 podStartE2EDuration="1m46.197225702s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.195939708 +0000 UTC m=+127.364718238" watchObservedRunningTime="2026-01-20 09:10:17.197225702 +0000 UTC m=+127.366004242" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.214173 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" event={"ID":"3b28944b-12d3-4087-b906-99fbf2937724","Type":"ContainerStarted","Data":"681c85f143fd196233b8af99153dc4afaefb32d23343907d3f47bcdc3bc17dc8"} Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.214230 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.227443 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" gracePeriod=30 Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.229308 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-smr5d" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.265272 5115 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-m6g4t" podStartSLOduration=106.265251635 podStartE2EDuration="1m46.265251635s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.222411007 +0000 UTC m=+127.391189537" watchObservedRunningTime="2026-01-20 09:10:17.265251635 +0000 UTC m=+127.434030165" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.266527 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-catalog-content\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.266788 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-utilities\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.266843 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ft22z\" (UniqueName: \"kubernetes.io/projected/098c57a3-a775-41d0-b528-6833df51eb70-kube-api-access-ft22z\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.267191 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.272986 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-utilities\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.285472 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.785452746 +0000 UTC m=+127.954231276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.291147 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-catalog-content\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.310007 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-m6krp" podStartSLOduration=106.309985694 podStartE2EDuration="1m46.309985694s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.265489122 +0000 UTC m=+127.434267652" watchObservedRunningTime="2026-01-20 09:10:17.309985694 +0000 UTC m=+127.478764224" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.312312 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" podStartSLOduration=106.312305057 podStartE2EDuration="1m46.312305057s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.309067429 +0000 UTC m=+127.477845959" watchObservedRunningTime="2026-01-20 09:10:17.312305057 +0000 UTC m=+127.481083577" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.323548 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft22z\" (UniqueName: \"kubernetes.io/projected/098c57a3-a775-41d0-b528-6833df51eb70-kube-api-access-ft22z\") pod \"certified-operators-ln8lc\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.370773 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.371178 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.871156584 +0000 UTC m=+128.039935114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.396023 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-ztcgs" podStartSLOduration=106.395995709 podStartE2EDuration="1m46.395995709s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.345402993 +0000 UTC m=+127.514181523" watchObservedRunningTime="2026-01-20 09:10:17.395995709 +0000 UTC m=+127.564774229" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.401369 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" podStartSLOduration=106.401352202 podStartE2EDuration="1m46.401352202s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.379856237 +0000 UTC m=+127.548634757" watchObservedRunningTime="2026-01-20 09:10:17.401352202 +0000 UTC m=+127.570130752" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.428084 5115 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk" podStartSLOduration=106.428059458 podStartE2EDuration="1m46.428059458s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:17.423288261 +0000 UTC m=+127.592066791" watchObservedRunningTime="2026-01-20 09:10:17.428059458 +0000 UTC m=+127.596837998" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.476714 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.477210 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:17.977197035 +0000 UTC m=+128.145975565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.515378 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrnvw"] Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.532473 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dlnj"] Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.577783 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.578126 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.07810574 +0000 UTC m=+128.246884270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.620105 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.679467 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.680104 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.180068932 +0000 UTC m=+128.348847462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.738031 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn6h9"] Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.749840 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:17 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:17 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:17 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.749931 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.782192 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.782589 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.282563659 +0000 UTC m=+128.451342189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.821587 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55280: no serving certificate available for the kubelet" Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.884233 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.884666 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.384649624 +0000 UTC m=+128.553428154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.988597 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.989106 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.489068623 +0000 UTC m=+128.657847153 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:17 crc kubenswrapper[5115]: I0120 09:10:17.989683 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:17 crc kubenswrapper[5115]: E0120 09:10:17.990055 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.490048339 +0000 UTC m=+128.658826859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.091095 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.091346 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.591306022 +0000 UTC m=+128.760084552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.091921 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.092314 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.592294799 +0000 UTC m=+128.761073329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.157714 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ln8lc"] Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.193616 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.193876 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.69383961 +0000 UTC m=+128.862618140 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.224647 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerStarted","Data":"b623557fb8fa89838a7fffcb0c7e471eeaf77057e10e543a3504832324b27404"} Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.224713 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn6h9" event={"ID":"c182ef91-1ca8-4330-bd75-8120c4401b54","Type":"ContainerStarted","Data":"91ffd30d0b07fe8b71ba5e2b62abd0321e935c136baf579cb7b5b85fbfc8da21"} Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.224731 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrnvw" event={"ID":"e388c4ad-0d02-4736-b503-a96f7478edb4","Type":"ContainerStarted","Data":"ba3c29f3ff3951d423c587bfc54fde3036fb68c70ae8bcabcb0199b3d1a764a2"} Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.224743 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerStarted","Data":"092aa312ded9179826cf1c7718d79766d577bbc74bfdc3260b75b3acb73e6544"} Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.295996 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.296393 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.796377219 +0000 UTC m=+128.965155749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.303938 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5plkc"] Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.314252 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.333285 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.380232 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5plkc"] Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.401957 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.403207 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.90316584 +0000 UTC m=+129.071944480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.408011 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-utilities\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.408283 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-catalog-content\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.408585 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.408718 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6cc7\" (UniqueName: 
\"kubernetes.io/projected/f9d4e242-d348-4f3f-8453-612b19e41f3a-kube-api-access-x6cc7\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.414663 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:18.914641177 +0000 UTC m=+129.083419707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.510395 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.510584 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-utilities\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.510640 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-catalog-content\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.510689 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x6cc7\" (UniqueName: \"kubernetes.io/projected/f9d4e242-d348-4f3f-8453-612b19e41f3a-kube-api-access-x6cc7\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.511660 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-catalog-content\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.511656 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-utilities\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.511760 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.011727059 +0000 UTC m=+129.180505579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.562121 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6cc7\" (UniqueName: \"kubernetes.io/projected/f9d4e242-d348-4f3f-8453-612b19e41f3a-kube-api-access-x6cc7\") pod \"redhat-marketplace-5plkc\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.612128 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.612700 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.112669194 +0000 UTC m=+129.281447724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.707238 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b5s99"] Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.713861 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.714036 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.21399822 +0000 UTC m=+129.382776750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.714517 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.715113 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.215089659 +0000 UTC m=+129.383868189 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.716817 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.718151 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5s99"] Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.734139 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:18 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:18 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:18 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.734235 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.815859 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.816094 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-utilities\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.816145 5115 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdgqv\" (UniqueName: \"kubernetes.io/projected/8b758f72-1c19-45ea-8f26-580952f254a6-kube-api-access-pdgqv\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.816226 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-catalog-content\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.816367 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.316340942 +0000 UTC m=+129.485119472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.852346 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.918081 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-utilities\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.918135 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pdgqv\" (UniqueName: \"kubernetes.io/projected/8b758f72-1c19-45ea-8f26-580952f254a6-kube-api-access-pdgqv\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.918159 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.918208 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-catalog-content\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.919207 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-catalog-content\") pod \"redhat-marketplace-b5s99\" 
(UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.919427 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-utilities\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:18 crc kubenswrapper[5115]: E0120 09:10:18.919522 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.419507267 +0000 UTC m=+129.588285797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:18 crc kubenswrapper[5115]: I0120 09:10:18.963971 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdgqv\" (UniqueName: \"kubernetes.io/projected/8b758f72-1c19-45ea-8f26-580952f254a6-kube-api-access-pdgqv\") pod \"redhat-marketplace-b5s99\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.020160 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.020416 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.5203736 +0000 UTC m=+129.689152130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.020854 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.021034 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.021488 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.521468539 +0000 UTC m=+129.690247069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.029520 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.032457 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.034836 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.039771 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.115783 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5plkc"] Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.122495 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.122679 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.622649101 +0000 UTC m=+129.791427631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.122804 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.122842 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.122931 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" 
Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.123309 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.623299108 +0000 UTC m=+129.792077638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:19 crc kubenswrapper[5115]: W0120 09:10:19.124288 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9d4e242_d348_4f3f_8453_612b19e41f3a.slice/crio-50d3c0e76b095c21c4ac1a5beba7290e74c3ffa7941936c22e8017974e850944 WatchSource:0}: Error finding container 50d3c0e76b095c21c4ac1a5beba7290e74c3ffa7941936c22e8017974e850944: Status 404 returned error can't find the container with id 50d3c0e76b095c21c4ac1a5beba7290e74c3ffa7941936c22e8017974e850944 Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.137216 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.223829 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.224032 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.224065 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.224349 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.724327255 +0000 UTC m=+129.893105785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.224397 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.245857 5115 generic.go:358] "Generic (PLEG): container finished" podID="082f3bd2-f112-4f2e-b955-0826aac6df97" containerID="91b474462e17975b2a2291c38c1eb2339450031fdea7fbcff486b36751736b0a" exitCode=0
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.246080 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" event={"ID":"082f3bd2-f112-4f2e-b955-0826aac6df97","Type":"ContainerDied","Data":"91b474462e17975b2a2291c38c1eb2339450031fdea7fbcff486b36751736b0a"}
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.249983 5115 generic.go:358] "Generic (PLEG): container finished" podID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerID="cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a" exitCode=0
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.250122 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn6h9" event={"ID":"c182ef91-1ca8-4330-bd75-8120c4401b54","Type":"ContainerDied","Data":"cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a"}
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.251690 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.256872 5115 generic.go:358] "Generic (PLEG): container finished" podID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerID="641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77" exitCode=0
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.256967 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrnvw" event={"ID":"e388c4ad-0d02-4736-b503-a96f7478edb4","Type":"ContainerDied","Data":"641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77"}
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.280161 5115 generic.go:358] "Generic (PLEG): container finished" podID="098c57a3-a775-41d0-b528-6833df51eb70" containerID="f88e943d46c00e03b49000272db95a963fb31d5df3dc7dea80bbd32f957cb111" exitCode=0
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.280926 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerDied","Data":"f88e943d46c00e03b49000272db95a963fb31d5df3dc7dea80bbd32f957cb111"}
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.293267 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5plkc" event={"ID":"f9d4e242-d348-4f3f-8453-612b19e41f3a","Type":"ContainerStarted","Data":"50d3c0e76b095c21c4ac1a5beba7290e74c3ffa7941936c22e8017974e850944"}
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.308810 5115 generic.go:358] "Generic (PLEG): container finished" podID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerID="06668f7c92efbf93f8c0b42e46d251a0aadb5b80b4c08ce779cc27955ee5a124" exitCode=0
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.310646 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerDied","Data":"06668f7c92efbf93f8c0b42e46d251a0aadb5b80b4c08ce779cc27955ee5a124"}
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.330574 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.331003 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.830988604 +0000 UTC m=+129.999767134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.366855 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.431825 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.432050 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:19.932007652 +0000 UTC m=+130.100786182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.445622 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.446143 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.460694 5115 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-xn6qp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]log ok
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]etcd ok
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/max-in-flight-filter ok
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 20 09:10:19 crc kubenswrapper[5115]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/openshift.io-startinformers ok
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 20 09:10:19 crc kubenswrapper[5115]: livez check failed
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.460994 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" podUID="72f63421-cfe9-45f8-85fe-b779a81a7ebb" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.489657 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.489755 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.501355 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-45pv6"]
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.510484 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.510663 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-45pv6"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.513166 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5s99"]
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.517639 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.523223 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-45pv6"]
Jan 20 09:10:19 crc kubenswrapper[5115]: W0120 09:10:19.527136 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b758f72_1c19_45ea_8f26_580952f254a6.slice/crio-d7901e6ddc7891030f2ad2227e71e157692b55779b1855cb63d09ff8803bd38a WatchSource:0}: Error finding container d7901e6ddc7891030f2ad2227e71e157692b55779b1855cb63d09ff8803bd38a: Status 404 returned error can't find the container with id d7901e6ddc7891030f2ad2227e71e157692b55779b1855cb63d09ff8803bd38a
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.531612 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-78z8z"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.533943 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-78z8z"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.534884 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.535960 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.035945287 +0000 UTC m=+130.204723817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.585190 5115 patch_prober.go:28] interesting pod/console-64d44f6ddf-78z8z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.585215 5115 patch_prober.go:28] interesting pod/downloads-747b44746d-ljj2s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.585261 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-78z8z" podUID="9aa837bd-63fc-4bb8-b158-d8632117a117" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.585308 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ljj2s" podUID="b9ac66ad-91ae-4ffd-b159-a7549ca71803" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.637339 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.637523 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-utilities\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.637679 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4msjm\" (UniqueName: \"kubernetes.io/projected/57355d9d-a14f-4cf0-8a63-842b27765063-kube-api-access-4msjm\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.637782 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-catalog-content\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6"
Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.638857 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.138825963 +0000 UTC m=+130.307604493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.737232 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:19 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:19 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:19 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.737359 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.742815 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-catalog-content\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.743569 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.746839 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-utilities\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.744296 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-catalog-content\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.747434 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4msjm\" (UniqueName: \"kubernetes.io/projected/57355d9d-a14f-4cf0-8a63-842b27765063-kube-api-access-4msjm\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.747539 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.747634 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-utilities\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6"
Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.748306 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.248281736 +0000 UTC m=+130.417060266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.772327 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4msjm\" (UniqueName: \"kubernetes.io/projected/57355d9d-a14f-4cf0-8a63-842b27765063-kube-api-access-4msjm\") pod \"redhat-operators-45pv6\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " pod="openshift-marketplace/redhat-operators-45pv6"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.829499 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-45pv6"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.850184 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.850623 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.350601959 +0000 UTC m=+130.519380489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.919553 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vv5qk"]
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.927944 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.937302 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vv5qk"]
Jan 20 09:10:19 crc kubenswrapper[5115]: I0120 09:10:19.956502 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:19 crc kubenswrapper[5115]: E0120 09:10:19.956946 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.456930238 +0000 UTC m=+130.625708768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.058161 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.058765 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-catalog-content\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.058802 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-utilities\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.058833 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4shq\" (UniqueName: \"kubernetes.io/projected/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-kube-api-access-w4shq\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.059032 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.558981833 +0000 UTC m=+130.727760373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.139366 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-45pv6"]
Jan 20 09:10:20 crc kubenswrapper[5115]: W0120 09:10:20.151079 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57355d9d_a14f_4cf0_8a63_842b27765063.slice/crio-2a29832ffd9412a21621468b6591cb9a7196b1735133523a4d5919937f22f017 WatchSource:0}: Error finding container 2a29832ffd9412a21621468b6591cb9a7196b1735133523a4d5919937f22f017: Status 404 returned error can't find the container with id 2a29832ffd9412a21621468b6591cb9a7196b1735133523a4d5919937f22f017
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.159866 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-catalog-content\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.160052 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-utilities\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.160087 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4shq\" (UniqueName: \"kubernetes.io/projected/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-kube-api-access-w4shq\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.160169 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.160483 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.660468203 +0000 UTC m=+130.829246733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.160607 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-catalog-content\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.160634 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-utilities\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.186169 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4shq\" (UniqueName: \"kubernetes.io/projected/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-kube-api-access-w4shq\") pod \"redhat-operators-vv5qk\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.256469 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.261855 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.262766 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.762741924 +0000 UTC m=+130.931520454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.319742 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerStarted","Data":"09806ac667b8436fffdd10a05c009eff6bb4282dd93406b629566c95167bc9ea"}
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.319799 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerStarted","Data":"2a29832ffd9412a21621468b6591cb9a7196b1735133523a4d5919937f22f017"}
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.322484 5115 generic.go:358] "Generic (PLEG): container finished" podID="8b758f72-1c19-45ea-8f26-580952f254a6" containerID="bc05a2904480cda612c996cbe03bed8e6889a08a812820a545bd5567edf848da" exitCode=0
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.322654 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5s99" event={"ID":"8b758f72-1c19-45ea-8f26-580952f254a6","Type":"ContainerDied","Data":"bc05a2904480cda612c996cbe03bed8e6889a08a812820a545bd5567edf848da"}
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.322679 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5s99" event={"ID":"8b758f72-1c19-45ea-8f26-580952f254a6","Type":"ContainerStarted","Data":"d7901e6ddc7891030f2ad2227e71e157692b55779b1855cb63d09ff8803bd38a"}
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.328360 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" event={"ID":"a8dd6004-2cc4-4971-9dcb-18d8871286b8","Type":"ContainerStarted","Data":"4d9ad503e31517d22d202d7525f5c2ff549e311ae1997fc22f3fe1f8e1bcd594"}
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.331574 5115 generic.go:358] "Generic (PLEG): container finished" podID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerID="292ea7ef1a462b0b3647f2424736d354073f39a37c563e3f2ffad608521d16f7" exitCode=0
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.331705 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5plkc" event={"ID":"f9d4e242-d348-4f3f-8453-612b19e41f3a","Type":"ContainerDied","Data":"292ea7ef1a462b0b3647f2424736d354073f39a37c563e3f2ffad608521d16f7"}
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.336957 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3","Type":"ContainerStarted","Data":"b4fa9a1ceaf5ad43ffd3fee419d8a0356e096f72ca2c6d2218b303494b3f72a4"}
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.342179 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4x4rk"
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.363978 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.364371 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.864357766 +0000 UTC m=+131.033136296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.433379 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55296: no serving certificate available for the kubelet"
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.471119 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.472679 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:20.972652849 +0000 UTC m=+131.141431379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.573075 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.573533 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.073513832 +0000 UTC m=+131.242292362 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.674637 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.675127 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.175104984 +0000 UTC m=+131.343883514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.722262 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.729331 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.742034 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:20 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:20 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:20 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.742115 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.782589 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqt2z\" (UniqueName: \"kubernetes.io/projected/082f3bd2-f112-4f2e-b955-0826aac6df97-kube-api-access-xqt2z\") pod \"082f3bd2-f112-4f2e-b955-0826aac6df97\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.782937 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/082f3bd2-f112-4f2e-b955-0826aac6df97-config-volume\") pod \"082f3bd2-f112-4f2e-b955-0826aac6df97\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.783240 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/082f3bd2-f112-4f2e-b955-0826aac6df97-secret-volume\") pod \"082f3bd2-f112-4f2e-b955-0826aac6df97\" (UID: \"082f3bd2-f112-4f2e-b955-0826aac6df97\") " Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.783401 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.783777 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.283756426 +0000 UTC m=+131.452534966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.787258 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/082f3bd2-f112-4f2e-b955-0826aac6df97-config-volume" (OuterVolumeSpecName: "config-volume") pod "082f3bd2-f112-4f2e-b955-0826aac6df97" (UID: "082f3bd2-f112-4f2e-b955-0826aac6df97"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.796431 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082f3bd2-f112-4f2e-b955-0826aac6df97-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "082f3bd2-f112-4f2e-b955-0826aac6df97" (UID: "082f3bd2-f112-4f2e-b955-0826aac6df97"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.815175 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vv5qk"] Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.830312 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/082f3bd2-f112-4f2e-b955-0826aac6df97-kube-api-access-xqt2z" (OuterVolumeSpecName: "kube-api-access-xqt2z") pod "082f3bd2-f112-4f2e-b955-0826aac6df97" (UID: "082f3bd2-f112-4f2e-b955-0826aac6df97"). InnerVolumeSpecName "kube-api-access-xqt2z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.885344 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.885613 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.385572855 +0000 UTC m=+131.554351385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.886493 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xqt2z\" (UniqueName: \"kubernetes.io/projected/082f3bd2-f112-4f2e-b955-0826aac6df97-kube-api-access-xqt2z\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.886521 5115 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/082f3bd2-f112-4f2e-b955-0826aac6df97-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.886532 5115 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/082f3bd2-f112-4f2e-b955-0826aac6df97-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:20 crc kubenswrapper[5115]: I0120 09:10:20.988234 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:20 crc kubenswrapper[5115]: E0120 09:10:20.988772 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-20 09:10:21.488754519 +0000 UTC m=+131.657533049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.089023 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.089397 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.589349085 +0000 UTC m=+131.758127615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.089770 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.090299 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.59027482 +0000 UTC m=+131.759053350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.190995 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.191227 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.691183024 +0000 UTC m=+131.859961554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.191493 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.191958 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.691934845 +0000 UTC m=+131.860713375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.293844 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.294201 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.794156834 +0000 UTC m=+131.962935554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.294788 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.297048 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.797022821 +0000 UTC m=+131.965801351 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.326625 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-s5mfg" Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.356382 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" event={"ID":"082f3bd2-f112-4f2e-b955-0826aac6df97","Type":"ContainerDied","Data":"08844f14a2be2524b67d25e6d9e317be36bfd5bc9b4b4cda240955fd50dbb961"} Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.356452 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08844f14a2be2524b67d25e6d9e317be36bfd5bc9b4b4cda240955fd50dbb961" Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.356559 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481660-hh6m6" Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.366471 5115 generic.go:358] "Generic (PLEG): container finished" podID="f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3" containerID="7be14e15da24a69df8084edf6f9152bf1adbc9a0753cde445072e14def02ab96" exitCode=0 Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.366614 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3","Type":"ContainerDied","Data":"7be14e15da24a69df8084edf6f9152bf1adbc9a0753cde445072e14def02ab96"} Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.368771 5115 generic.go:358] "Generic (PLEG): container finished" podID="57355d9d-a14f-4cf0-8a63-842b27765063" containerID="09806ac667b8436fffdd10a05c009eff6bb4282dd93406b629566c95167bc9ea" exitCode=0 Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.368982 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerDied","Data":"09806ac667b8436fffdd10a05c009eff6bb4282dd93406b629566c95167bc9ea"} Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.391803 5115 generic.go:358] "Generic (PLEG): container finished" podID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerID="5c908a7c31ca720aadea8c8fd54b15fdf8ae8be43be8f76f2eb7b5413aeb74c6" exitCode=0 Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.392514 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerDied","Data":"5c908a7c31ca720aadea8c8fd54b15fdf8ae8be43be8f76f2eb7b5413aeb74c6"} Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.392578 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" 
event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerStarted","Data":"523e078e78e6cfb054a40a6916767e994deee00e08213d3cb61f49d65fa63001"} Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.396787 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.397206 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.897177885 +0000 UTC m=+132.065956415 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.397356 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.397648 5115 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.897641557 +0000 UTC m=+132.066420087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.498819 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.499819 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:21.999800094 +0000 UTC m=+132.168578624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.600389 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.601007 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.100980727 +0000 UTC m=+132.269759257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.703017 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.2029759 +0000 UTC m=+132.371754430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.702978 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.703521 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.703953 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.203937055 +0000 UTC m=+132.372715585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.730162 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:21 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:21 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:21 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.730240 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.805098 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.805352 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.305305812 +0000 UTC m=+132.474084382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.805743 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.806541 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.306532515 +0000 UTC m=+132.475311045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.907055 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.907318 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.407279604 +0000 UTC m=+132.576058134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:21 crc kubenswrapper[5115]: I0120 09:10:21.907480 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:21 crc kubenswrapper[5115]: E0120 09:10:21.908022 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.408003604 +0000 UTC m=+132.576782134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.009105 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:22 crc kubenswrapper[5115]: E0120 09:10:22.009829 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.509802143 +0000 UTC m=+132.678580673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.010156 5115 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.111855 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:22 crc kubenswrapper[5115]: E0120 09:10:22.114265 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.614250141 +0000 UTC m=+132.783028671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-b674j" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.213275 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:22 crc kubenswrapper[5115]: E0120 09:10:22.213788 5115 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-20 09:10:22.713758928 +0000 UTC m=+132.882537458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.296259 5115 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-20T09:10:22.010195143Z","UUID":"9eb799e7-b499-4908-bf21-fcb198d19ef3","Handler":null,"Name":"","Endpoint":""} Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.301508 5115 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.301551 5115 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.316149 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.322226 5115 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.322269 5115 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.413506 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-b674j\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.419767 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" event={"ID":"a8dd6004-2cc4-4971-9dcb-18d8871286b8","Type":"ContainerStarted","Data":"484a181d692c0d02e1303d457c80939b89ab87a2400e20ec44047fa6277be2ca"} Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.496497 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.504384 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.523921 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.565060 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.681538 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.727123 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kubelet-dir\") pod \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.727295 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kube-api-access\") pod \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\" (UID: \"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3\") " Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.727593 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3" (UID: "f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.732965 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:22 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:22 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:22 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.733028 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.738830 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3" (UID: "f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.830660 5115 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.830711 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:22 crc kubenswrapper[5115]: I0120 09:10:22.913225 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-b674j"] Jan 20 09:10:22 crc kubenswrapper[5115]: W0120 09:10:22.925395 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod580c8ecd_e9bb_4c33_aeb2_f304adb8119c.slice/crio-d053f0589af44bf1ec4966f80948e0266381b97821c76787bebafd985060d717 WatchSource:0}: Error finding container d053f0589af44bf1ec4966f80948e0266381b97821c76787bebafd985060d717: Status 404 returned error can't find the container with id d053f0589af44bf1ec4966f80948e0266381b97821c76787bebafd985060d717 Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 09:10:23.436952 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" event={"ID":"a8dd6004-2cc4-4971-9dcb-18d8871286b8","Type":"ContainerStarted","Data":"d0e95563e64cf343471c4ee061cce2808083b14923444ba8c0967cdfb0ae61c2"} Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 09:10:23.443356 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3","Type":"ContainerDied","Data":"b4fa9a1ceaf5ad43ffd3fee419d8a0356e096f72ca2c6d2218b303494b3f72a4"} Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 
09:10:23.443413 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4fa9a1ceaf5ad43ffd3fee419d8a0356e096f72ca2c6d2218b303494b3f72a4" Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 09:10:23.443410 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 09:10:23.445580 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-b674j" event={"ID":"580c8ecd-e9bb-4c33-aeb2-f304adb8119c","Type":"ContainerStarted","Data":"d053f0589af44bf1ec4966f80948e0266381b97821c76787bebafd985060d717"} Jan 20 09:10:23 crc kubenswrapper[5115]: E0120 09:10:23.527644 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 20 09:10:23 crc kubenswrapper[5115]: E0120 09:10:23.531792 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 20 09:10:23 crc kubenswrapper[5115]: E0120 09:10:23.534784 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 20 09:10:23 crc kubenswrapper[5115]: E0120 09:10:23.534845 5115 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 09:10:23.730220 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:23 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:23 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:23 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:23 crc kubenswrapper[5115]: I0120 09:10:23.730317 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.040809 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.041456 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3" containerName="pruner" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.041469 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3" containerName="pruner" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.041487 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="082f3bd2-f112-4f2e-b955-0826aac6df97" containerName="collect-profiles" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.041493 5115 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="082f3bd2-f112-4f2e-b955-0826aac6df97" containerName="collect-profiles" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.041600 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="082f3bd2-f112-4f2e-b955-0826aac6df97" containerName="collect-profiles" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.041614 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="f22a9e9c-d2a2-43c0-91ae-e65e338d1fd3" containerName="pruner" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.244851 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.245574 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.248307 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.250279 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.279359 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.306721 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-59xcc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.356081 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed085de-0c46-4008-90d3-73bfbbbd98e5-kube-api-access\") pod 
\"revision-pruner-11-crc\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.359360 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed085de-0c46-4008-90d3-73bfbbbd98e5-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.451356 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.457878 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-xn6qp" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.460541 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed085de-0c46-4008-90d3-73bfbbbd98e5-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.461154 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed085de-0c46-4008-90d3-73bfbbbd98e5-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.460726 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed085de-0c46-4008-90d3-73bfbbbd98e5-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: 
\"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.462501 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" event={"ID":"a8dd6004-2cc4-4971-9dcb-18d8871286b8","Type":"ContainerStarted","Data":"f0e034b3778ca9b13cc062038f8c0b3384de2102bc4b55c42742e4878f817854"} Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.497836 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed085de-0c46-4008-90d3-73bfbbbd98e5-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.506626 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-ttcl5" podStartSLOduration=18.506591621 podStartE2EDuration="18.506591621s" podCreationTimestamp="2026-01-20 09:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:24.50131293 +0000 UTC m=+134.670091470" watchObservedRunningTime="2026-01-20 09:10:24.506591621 +0000 UTC m=+134.675370161" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.582942 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.741408 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:24 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:24 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:24 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:24 crc kubenswrapper[5115]: I0120 09:10:24.741932 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.078802 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 20 09:10:25 crc kubenswrapper[5115]: W0120 09:10:25.087987 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfed085de_0c46_4008_90d3_73bfbbbd98e5.slice/crio-6268286b4b9de913e928d9c698d1aaf7314242b0196f0c48150a63a078ee04b7 WatchSource:0}: Error finding container 6268286b4b9de913e928d9c698d1aaf7314242b0196f0c48150a63a078ee04b7: Status 404 returned error can't find the container with id 6268286b4b9de913e928d9c698d1aaf7314242b0196f0c48150a63a078ee04b7 Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.472471 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"fed085de-0c46-4008-90d3-73bfbbbd98e5","Type":"ContainerStarted","Data":"6268286b4b9de913e928d9c698d1aaf7314242b0196f0c48150a63a078ee04b7"} Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.474666 5115 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-b674j" event={"ID":"580c8ecd-e9bb-4c33-aeb2-f304adb8119c","Type":"ContainerStarted","Data":"658aaa1c341101e06f75ed771bab4ffef1039984a8c36f1f22e7f660d9e832ca"} Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.475388 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.499043 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-b674j" podStartSLOduration=114.499024033 podStartE2EDuration="1m54.499024033s" podCreationTimestamp="2026-01-20 09:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:25.494607465 +0000 UTC m=+135.663385995" watchObservedRunningTime="2026-01-20 09:10:25.499024033 +0000 UTC m=+135.667802563" Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.586257 5115 ???:1] "http: TLS handshake error from 192.168.126.11:56408: no serving certificate available for the kubelet" Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.731518 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:25 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:25 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:25 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.731612 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 09:10:25 crc kubenswrapper[5115]: I0120 09:10:25.756240 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-ljj2s" Jan 20 09:10:26 crc kubenswrapper[5115]: I0120 09:10:26.486073 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"fed085de-0c46-4008-90d3-73bfbbbd98e5","Type":"ContainerStarted","Data":"0a9efaca9446742ac2f456bcbf4723314f9fc1f8ccf1efc98b29a9535d0e685a"} Jan 20 09:10:26 crc kubenswrapper[5115]: I0120 09:10:26.507612 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=2.507584035 podStartE2EDuration="2.507584035s" podCreationTimestamp="2026-01-20 09:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:26.501726488 +0000 UTC m=+136.670505018" watchObservedRunningTime="2026-01-20 09:10:26.507584035 +0000 UTC m=+136.676362565" Jan 20 09:10:26 crc kubenswrapper[5115]: I0120 09:10:26.729402 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 09:10:26 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld Jan 20 09:10:26 crc kubenswrapper[5115]: [+]process-running ok Jan 20 09:10:26 crc kubenswrapper[5115]: healthz check failed Jan 20 09:10:26 crc kubenswrapper[5115]: I0120 09:10:26.729728 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 
09:10:27 crc kubenswrapper[5115]: I0120 09:10:27.729095 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:27 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:27 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:27 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:27 crc kubenswrapper[5115]: I0120 09:10:27.729194 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:28 crc kubenswrapper[5115]: I0120 09:10:28.506938 5115 generic.go:358] "Generic (PLEG): container finished" podID="fed085de-0c46-4008-90d3-73bfbbbd98e5" containerID="0a9efaca9446742ac2f456bcbf4723314f9fc1f8ccf1efc98b29a9535d0e685a" exitCode=0
Jan 20 09:10:28 crc kubenswrapper[5115]: I0120 09:10:28.507145 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"fed085de-0c46-4008-90d3-73bfbbbd98e5","Type":"ContainerDied","Data":"0a9efaca9446742ac2f456bcbf4723314f9fc1f8ccf1efc98b29a9535d0e685a"}
Jan 20 09:10:28 crc kubenswrapper[5115]: I0120 09:10:28.729066 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:28 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:28 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:28 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:28 crc kubenswrapper[5115]: I0120 09:10:28.729162 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:29 crc kubenswrapper[5115]: I0120 09:10:29.532380 5115 patch_prober.go:28] interesting pod/console-64d44f6ddf-78z8z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Jan 20 09:10:29 crc kubenswrapper[5115]: I0120 09:10:29.532483 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-78z8z" podUID="9aa837bd-63fc-4bb8-b158-d8632117a117" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused"
Jan 20 09:10:29 crc kubenswrapper[5115]: I0120 09:10:29.729324 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:29 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:29 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:29 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:29 crc kubenswrapper[5115]: I0120 09:10:29.729409 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:30 crc kubenswrapper[5115]: I0120 09:10:30.729388 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:30 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:30 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:30 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:30 crc kubenswrapper[5115]: I0120 09:10:30.729500 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:31 crc kubenswrapper[5115]: I0120 09:10:31.731005 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:31 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:31 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:31 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:31 crc kubenswrapper[5115]: I0120 09:10:31.731636 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:31 crc kubenswrapper[5115]: I0120 09:10:31.847018 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pnd9p"
Jan 20 09:10:32 crc kubenswrapper[5115]: I0120 09:10:32.729595 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:32 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:32 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:32 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:32 crc kubenswrapper[5115]: I0120 09:10:32.729709 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:33 crc kubenswrapper[5115]: E0120 09:10:33.528167 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 20 09:10:33 crc kubenswrapper[5115]: E0120 09:10:33.530290 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 20 09:10:33 crc kubenswrapper[5115]: E0120 09:10:33.531477 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 20 09:10:33 crc kubenswrapper[5115]: E0120 09:10:33.531524 5115 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 20 09:10:33 crc kubenswrapper[5115]: I0120 09:10:33.729650 5115 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-n9hxc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 09:10:33 crc kubenswrapper[5115]: [-]has-synced failed: reason withheld
Jan 20 09:10:33 crc kubenswrapper[5115]: [+]process-running ok
Jan 20 09:10:33 crc kubenswrapper[5115]: healthz check failed
Jan 20 09:10:33 crc kubenswrapper[5115]: I0120 09:10:33.729738 5115 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc" podUID="0d738dd6-3c15-4131-837d-591792cb41cd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 09:10:34 crc kubenswrapper[5115]: I0120 09:10:34.729672 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc"
Jan 20 09:10:34 crc kubenswrapper[5115]: I0120 09:10:34.734641 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-n9hxc"
Jan 20 09:10:35 crc kubenswrapper[5115]: I0120 09:10:35.851676 5115 ???:1] "http: TLS handshake error from 192.168.126.11:40232: no serving certificate available for the kubelet"
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.095492 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"]
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.096491 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" podUID="664dc1e9-b220-4dd9-8576-b5798850bc57" containerName="controller-manager" containerID="cri-o://883ad34e44bc13a65fb331c725c96d57ffd7da473ec9ed16860ba076f2702bf1" gracePeriod=30
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.159301 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"]
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.159612 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" podUID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" containerName="route-controller-manager" containerID="cri-o://b0488d20e94845aedd9b1bbe8d5471305129edf3c1b7b5a598c3cede13658a01" gracePeriod=30
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.325501 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.502830 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed085de-0c46-4008-90d3-73bfbbbd98e5-kube-api-access\") pod \"fed085de-0c46-4008-90d3-73bfbbbd98e5\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") "
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.502925 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed085de-0c46-4008-90d3-73bfbbbd98e5-kubelet-dir\") pod \"fed085de-0c46-4008-90d3-73bfbbbd98e5\" (UID: \"fed085de-0c46-4008-90d3-73bfbbbd98e5\") "
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.503390 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fed085de-0c46-4008-90d3-73bfbbbd98e5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fed085de-0c46-4008-90d3-73bfbbbd98e5" (UID: "fed085de-0c46-4008-90d3-73bfbbbd98e5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.517227 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed085de-0c46-4008-90d3-73bfbbbd98e5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fed085de-0c46-4008-90d3-73bfbbbd98e5" (UID: "fed085de-0c46-4008-90d3-73bfbbbd98e5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.574672 5115 generic.go:358] "Generic (PLEG): container finished" podID="664dc1e9-b220-4dd9-8576-b5798850bc57" containerID="883ad34e44bc13a65fb331c725c96d57ffd7da473ec9ed16860ba076f2702bf1" exitCode=0
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.575265 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" event={"ID":"664dc1e9-b220-4dd9-8576-b5798850bc57","Type":"ContainerDied","Data":"883ad34e44bc13a65fb331c725c96d57ffd7da473ec9ed16860ba076f2702bf1"}
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.576846 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"fed085de-0c46-4008-90d3-73bfbbbd98e5","Type":"ContainerDied","Data":"6268286b4b9de913e928d9c698d1aaf7314242b0196f0c48150a63a078ee04b7"}
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.576886 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6268286b4b9de913e928d9c698d1aaf7314242b0196f0c48150a63a078ee04b7"
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.576938 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.579873 5115 generic.go:358] "Generic (PLEG): container finished" podID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" containerID="b0488d20e94845aedd9b1bbe8d5471305129edf3c1b7b5a598c3cede13658a01" exitCode=0
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.579967 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" event={"ID":"b39cc292-22ad-4fb0-9d3f-6467c81680eb","Type":"ContainerDied","Data":"b0488d20e94845aedd9b1bbe8d5471305129edf3c1b7b5a598c3cede13658a01"}
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.604686 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fed085de-0c46-4008-90d3-73bfbbbd98e5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:37 crc kubenswrapper[5115]: I0120 09:10:37.604735 5115 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed085de-0c46-4008-90d3-73bfbbbd98e5-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:39 crc kubenswrapper[5115]: I0120 09:10:39.538262 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-78z8z"
Jan 20 09:10:39 crc kubenswrapper[5115]: I0120 09:10:39.545045 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-78z8z"
Jan 20 09:10:42 crc kubenswrapper[5115]: I0120 09:10:42.988580 5115 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-lg8fb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 20 09:10:42 crc kubenswrapper[5115]: I0120 09:10:42.989249 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" podUID="664dc1e9-b220-4dd9-8576-b5798850bc57" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 20 09:10:43 crc kubenswrapper[5115]: E0120 09:10:43.528645 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 20 09:10:43 crc kubenswrapper[5115]: E0120 09:10:43.532024 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 20 09:10:43 crc kubenswrapper[5115]: E0120 09:10:43.534325 5115 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 20 09:10:43 crc kubenswrapper[5115]: E0120 09:10:43.534407 5115 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 20 09:10:45 crc kubenswrapper[5115]: I0120 09:10:45.682088 5115 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-jxpqr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Jan 20 09:10:45 crc kubenswrapper[5115]: I0120 09:10:45.683184 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" podUID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Jan 20 09:10:46 crc kubenswrapper[5115]: I0120 09:10:46.493766 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-b674j"
Jan 20 09:10:47 crc kubenswrapper[5115]: I0120 09:10:47.692334 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 20 09:10:49 crc kubenswrapper[5115]: I0120 09:10:49.316138 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-gc77j"
Jan 20 09:10:49 crc kubenswrapper[5115]: I0120 09:10:49.667702 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-pkz7s_4d93cff2-21b0-4fcb-b899-b6efe5a56822/kube-multus-additional-cni-plugins/0.log"
Jan 20 09:10:49 crc kubenswrapper[5115]: I0120 09:10:49.667779 5115 generic.go:358] "Generic (PLEG): container finished" podID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442" exitCode=137
Jan 20 09:10:49 crc kubenswrapper[5115]: I0120 09:10:49.667842 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" event={"ID":"4d93cff2-21b0-4fcb-b899-b6efe5a56822","Type":"ContainerDied","Data":"fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442"}
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.366236 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-pkz7s_4d93cff2-21b0-4fcb-b899-b6efe5a56822/kube-multus-additional-cni-plugins/0.log"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.366648 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.419190 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d93cff2-21b0-4fcb-b899-b6efe5a56822-ready\") pod \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.419430 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d93cff2-21b0-4fcb-b899-b6efe5a56822-cni-sysctl-allowlist\") pod \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.419544 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x65zw\" (UniqueName: \"kubernetes.io/projected/4d93cff2-21b0-4fcb-b899-b6efe5a56822-kube-api-access-x65zw\") pod \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.419575 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d93cff2-21b0-4fcb-b899-b6efe5a56822-tuning-conf-dir\") pod \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\" (UID: \"4d93cff2-21b0-4fcb-b899-b6efe5a56822\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.420058 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d93cff2-21b0-4fcb-b899-b6efe5a56822-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "4d93cff2-21b0-4fcb-b899-b6efe5a56822" (UID: "4d93cff2-21b0-4fcb-b899-b6efe5a56822"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.420074 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d93cff2-21b0-4fcb-b899-b6efe5a56822-ready" (OuterVolumeSpecName: "ready") pod "4d93cff2-21b0-4fcb-b899-b6efe5a56822" (UID: "4d93cff2-21b0-4fcb-b899-b6efe5a56822"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.420881 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d93cff2-21b0-4fcb-b899-b6efe5a56822-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "4d93cff2-21b0-4fcb-b899-b6efe5a56822" (UID: "4d93cff2-21b0-4fcb-b899-b6efe5a56822"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.431882 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d93cff2-21b0-4fcb-b899-b6efe5a56822-kube-api-access-x65zw" (OuterVolumeSpecName: "kube-api-access-x65zw") pod "4d93cff2-21b0-4fcb-b899-b6efe5a56822" (UID: "4d93cff2-21b0-4fcb-b899-b6efe5a56822"). InnerVolumeSpecName "kube-api-access-x65zw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.522934 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x65zw\" (UniqueName: \"kubernetes.io/projected/4d93cff2-21b0-4fcb-b899-b6efe5a56822-kube-api-access-x65zw\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.522974 5115 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4d93cff2-21b0-4fcb-b899-b6efe5a56822-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.522984 5115 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4d93cff2-21b0-4fcb-b899-b6efe5a56822-ready\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.522994 5115 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4d93cff2-21b0-4fcb-b899-b6efe5a56822-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.678263 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-pkz7s_4d93cff2-21b0-4fcb-b899-b6efe5a56822/kube-multus-additional-cni-plugins/0.log"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.678514 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s" event={"ID":"4d93cff2-21b0-4fcb-b899-b6efe5a56822","Type":"ContainerDied","Data":"857692043d4e2a0e52ae73c61d049790e037f8377cfd4c3084e2ea0725ae7c00"}
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.678582 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pkz7s"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.678629 5115 scope.go:117] "RemoveContainer" containerID="fbd3f92e049db05dae4cc895fdc510d06b5848377015dd755d42e4d740ef5442"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.684432 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerStarted","Data":"a33dfb9140b05712014768cf8b01acc9283196096d0f87e1b764f33c91c5086f"}
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.698575 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerStarted","Data":"099a58929bcd11d7806830d94c60b1c1e735c7d4ed3c769e2373744a991c063d"}
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.706013 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.709368 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerStarted","Data":"ee94f68db59e4e1ddf21ca6ca9dd7fd93edccbc4ea24208558bcdd84d58df32e"}
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.754846 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pkz7s"]
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.762110 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pkz7s"]
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.829437 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-client-ca\") pod \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.829515 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-config\") pod \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.829616 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b39cc292-22ad-4fb0-9d3f-6467c81680eb-tmp\") pod \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.829660 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b39cc292-22ad-4fb0-9d3f-6467c81680eb-serving-cert\") pod \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.829762 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj2cf\" (UniqueName: \"kubernetes.io/projected/b39cc292-22ad-4fb0-9d3f-6467c81680eb-kube-api-access-cj2cf\") pod \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\" (UID: \"b39cc292-22ad-4fb0-9d3f-6467c81680eb\") "
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.834392 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"]
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835623 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerName="kube-multus-additional-cni-plugins"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835645 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerName="kube-multus-additional-cni-plugins"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835659 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fed085de-0c46-4008-90d3-73bfbbbd98e5" containerName="pruner"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835666 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="fed085de-0c46-4008-90d3-73bfbbbd98e5" containerName="pruner"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835689 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" containerName="route-controller-manager"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835696 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" containerName="route-controller-manager"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835860 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" containerName="kube-multus-additional-cni-plugins"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835875 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" containerName="route-controller-manager"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.835884 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="fed085de-0c46-4008-90d3-73bfbbbd98e5" containerName="pruner"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.838785 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-client-ca" (OuterVolumeSpecName: "client-ca") pod "b39cc292-22ad-4fb0-9d3f-6467c81680eb" (UID: "b39cc292-22ad-4fb0-9d3f-6467c81680eb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.843990 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-config" (OuterVolumeSpecName: "config") pod "b39cc292-22ad-4fb0-9d3f-6467c81680eb" (UID: "b39cc292-22ad-4fb0-9d3f-6467c81680eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.846416 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b39cc292-22ad-4fb0-9d3f-6467c81680eb-tmp" (OuterVolumeSpecName: "tmp") pod "b39cc292-22ad-4fb0-9d3f-6467c81680eb" (UID: "b39cc292-22ad-4fb0-9d3f-6467c81680eb"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.853023 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"]
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.853255 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.854101 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39cc292-22ad-4fb0-9d3f-6467c81680eb-kube-api-access-cj2cf" (OuterVolumeSpecName: "kube-api-access-cj2cf") pod "b39cc292-22ad-4fb0-9d3f-6467c81680eb" (UID: "b39cc292-22ad-4fb0-9d3f-6467c81680eb"). InnerVolumeSpecName "kube-api-access-cj2cf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.856358 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39cc292-22ad-4fb0-9d3f-6467c81680eb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b39cc292-22ad-4fb0-9d3f-6467c81680eb" (UID: "b39cc292-22ad-4fb0-9d3f-6467c81680eb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933123 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-config\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933619 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6822615-2e54-40b4-a17f-9d5fb26e31db-tmp\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933647 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd9fq\" (UniqueName: \"kubernetes.io/projected/f6822615-2e54-40b4-a17f-9d5fb26e31db-kube-api-access-hd9fq\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933735 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-client-ca\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933767 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6822615-2e54-40b4-a17f-9d5fb26e31db-serving-cert\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933819 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cj2cf\" (UniqueName: \"kubernetes.io/projected/b39cc292-22ad-4fb0-9d3f-6467c81680eb-kube-api-access-cj2cf\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933832 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933844 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b39cc292-22ad-4fb0-9d3f-6467c81680eb-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933855 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b39cc292-22ad-4fb0-9d3f-6467c81680eb-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:50 crc kubenswrapper[5115]: I0120 09:10:50.933865 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b39cc292-22ad-4fb0-9d3f-6467c81680eb-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.035033 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-client-ca\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.035122 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6822615-2e54-40b4-a17f-9d5fb26e31db-serving-cert\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.035154 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-config\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.035208 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6822615-2e54-40b4-a17f-9d5fb26e31db-tmp\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.035242 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hd9fq\" (UniqueName: \"kubernetes.io/projected/f6822615-2e54-40b4-a17f-9d5fb26e31db-kube-api-access-hd9fq\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"
Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.036380 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName:
\"kubernetes.io/empty-dir/f6822615-2e54-40b4-a17f-9d5fb26e31db-tmp\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.036929 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-client-ca\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.037278 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-config\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.044376 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6822615-2e54-40b4-a17f-9d5fb26e31db-serving-cert\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.060435 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd9fq\" (UniqueName: \"kubernetes.io/projected/f6822615-2e54-40b4-a17f-9d5fb26e31db-kube-api-access-hd9fq\") pod \"route-controller-manager-64b4fd558d-xn8z9\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:51 crc 
kubenswrapper[5115]: I0120 09:10:51.203237 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.204714 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.228968 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8469db6cb8-pclzc"] Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.229695 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="664dc1e9-b220-4dd9-8576-b5798850bc57" containerName="controller-manager" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.229718 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="664dc1e9-b220-4dd9-8576-b5798850bc57" containerName="controller-manager" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.229820 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="664dc1e9-b220-4dd9-8576-b5798850bc57" containerName="controller-manager" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.242338 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8469db6cb8-pclzc"] Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.242519 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.339331 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/664dc1e9-b220-4dd9-8576-b5798850bc57-tmp\") pod \"664dc1e9-b220-4dd9-8576-b5798850bc57\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.339698 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-proxy-ca-bundles\") pod \"664dc1e9-b220-4dd9-8576-b5798850bc57\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.339753 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-client-ca\") pod \"664dc1e9-b220-4dd9-8576-b5798850bc57\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.339789 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-config\") pod \"664dc1e9-b220-4dd9-8576-b5798850bc57\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.339841 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/664dc1e9-b220-4dd9-8576-b5798850bc57-serving-cert\") pod \"664dc1e9-b220-4dd9-8576-b5798850bc57\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.339859 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-697ts\" (UniqueName: 
\"kubernetes.io/projected/664dc1e9-b220-4dd9-8576-b5798850bc57-kube-api-access-697ts\") pod \"664dc1e9-b220-4dd9-8576-b5798850bc57\" (UID: \"664dc1e9-b220-4dd9-8576-b5798850bc57\") " Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.340004 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88667356-ca96-429b-a986-2018168d5da2-serving-cert\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.340045 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-config\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.340061 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-client-ca\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.340078 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4kgn\" (UniqueName: \"kubernetes.io/projected/88667356-ca96-429b-a986-2018168d5da2-kube-api-access-l4kgn\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.340135 5115 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-proxy-ca-bundles\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.340151 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88667356-ca96-429b-a986-2018168d5da2-tmp\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.342334 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/664dc1e9-b220-4dd9-8576-b5798850bc57-tmp" (OuterVolumeSpecName: "tmp") pod "664dc1e9-b220-4dd9-8576-b5798850bc57" (UID: "664dc1e9-b220-4dd9-8576-b5798850bc57"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.342817 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "664dc1e9-b220-4dd9-8576-b5798850bc57" (UID: "664dc1e9-b220-4dd9-8576-b5798850bc57"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.343012 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-client-ca" (OuterVolumeSpecName: "client-ca") pod "664dc1e9-b220-4dd9-8576-b5798850bc57" (UID: "664dc1e9-b220-4dd9-8576-b5798850bc57"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.343289 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-config" (OuterVolumeSpecName: "config") pod "664dc1e9-b220-4dd9-8576-b5798850bc57" (UID: "664dc1e9-b220-4dd9-8576-b5798850bc57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.355292 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/664dc1e9-b220-4dd9-8576-b5798850bc57-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "664dc1e9-b220-4dd9-8576-b5798850bc57" (UID: "664dc1e9-b220-4dd9-8576-b5798850bc57"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.362662 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/664dc1e9-b220-4dd9-8576-b5798850bc57-kube-api-access-697ts" (OuterVolumeSpecName: "kube-api-access-697ts") pod "664dc1e9-b220-4dd9-8576-b5798850bc57" (UID: "664dc1e9-b220-4dd9-8576-b5798850bc57"). InnerVolumeSpecName "kube-api-access-697ts". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.439488 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"] Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.441703 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-proxy-ca-bundles\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.441930 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88667356-ca96-429b-a986-2018168d5da2-tmp\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442223 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88667356-ca96-429b-a986-2018168d5da2-serving-cert\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442283 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-config\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442301 5115 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-client-ca\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442328 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l4kgn\" (UniqueName: \"kubernetes.io/projected/88667356-ca96-429b-a986-2018168d5da2-kube-api-access-l4kgn\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442451 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442461 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/664dc1e9-b220-4dd9-8576-b5798850bc57-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442471 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-697ts\" (UniqueName: \"kubernetes.io/projected/664dc1e9-b220-4dd9-8576-b5798850bc57-kube-api-access-697ts\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442486 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/664dc1e9-b220-4dd9-8576-b5798850bc57-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442494 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.442504 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/664dc1e9-b220-4dd9-8576-b5798850bc57-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.443204 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-proxy-ca-bundles\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.443971 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-client-ca\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.444533 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-config\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.444877 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88667356-ca96-429b-a986-2018168d5da2-tmp\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 
09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.448588 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88667356-ca96-429b-a986-2018168d5da2-serving-cert\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.458504 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4kgn\" (UniqueName: \"kubernetes.io/projected/88667356-ca96-429b-a986-2018168d5da2-kube-api-access-l4kgn\") pod \"controller-manager-8469db6cb8-pclzc\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: W0120 09:10:51.479260 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6822615_2e54_40b4_a17f_9d5fb26e31db.slice/crio-ec45a798e536db03e60218024cbf350c164b3ba144b87a54d410e02900429d88 WatchSource:0}: Error finding container ec45a798e536db03e60218024cbf350c164b3ba144b87a54d410e02900429d88: Status 404 returned error can't find the container with id ec45a798e536db03e60218024cbf350c164b3ba144b87a54d410e02900429d88 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.589201 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.729768 5115 generic.go:358] "Generic (PLEG): container finished" podID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerID="099a58929bcd11d7806830d94c60b1c1e735c7d4ed3c769e2373744a991c063d" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.729861 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerDied","Data":"099a58929bcd11d7806830d94c60b1c1e735c7d4ed3c769e2373744a991c063d"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.733702 5115 generic.go:358] "Generic (PLEG): container finished" podID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerID="288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.734070 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn6h9" event={"ID":"c182ef91-1ca8-4330-bd75-8120c4401b54","Type":"ContainerDied","Data":"288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.736207 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" event={"ID":"664dc1e9-b220-4dd9-8576-b5798850bc57","Type":"ContainerDied","Data":"11a76b2995d1e7821d8b5caa00d0b12a5012c7b092dc0a7b36b27b7457c6f577"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.736266 5115 scope.go:117] "RemoveContainer" containerID="883ad34e44bc13a65fb331c725c96d57ffd7da473ec9ed16860ba076f2702bf1" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.736438 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-lg8fb" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.744239 5115 generic.go:358] "Generic (PLEG): container finished" podID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerID="9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.744317 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrnvw" event={"ID":"e388c4ad-0d02-4736-b503-a96f7478edb4","Type":"ContainerDied","Data":"9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.748298 5115 generic.go:358] "Generic (PLEG): container finished" podID="098c57a3-a775-41d0-b528-6833df51eb70" containerID="ee94f68db59e4e1ddf21ca6ca9dd7fd93edccbc4ea24208558bcdd84d58df32e" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.748386 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerDied","Data":"ee94f68db59e4e1ddf21ca6ca9dd7fd93edccbc4ea24208558bcdd84d58df32e"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.766712 5115 generic.go:358] "Generic (PLEG): container finished" podID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerID="74b5178a1b534ac941dea2392034f3b3ec2731f44ad8c1e9849d9151b8564a9d" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.766863 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5plkc" event={"ID":"f9d4e242-d348-4f3f-8453-612b19e41f3a","Type":"ContainerDied","Data":"74b5178a1b534ac941dea2392034f3b3ec2731f44ad8c1e9849d9151b8564a9d"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.780563 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.780586 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr" event={"ID":"b39cc292-22ad-4fb0-9d3f-6467c81680eb","Type":"ContainerDied","Data":"5fb596da1738dbe8416b2b3a595dc262a4288da61aa3303a2ea6eb0db0479d63"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.781648 5115 scope.go:117] "RemoveContainer" containerID="b0488d20e94845aedd9b1bbe8d5471305129edf3c1b7b5a598c3cede13658a01" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.793643 5115 generic.go:358] "Generic (PLEG): container finished" podID="57355d9d-a14f-4cf0-8a63-842b27765063" containerID="1c7349b861fcc3cdec3f5eaa960ebb43329afec1ce06d636fabc17f9cb7e20c8" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.793747 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerDied","Data":"1c7349b861fcc3cdec3f5eaa960ebb43329afec1ce06d636fabc17f9cb7e20c8"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.801160 5115 generic.go:358] "Generic (PLEG): container finished" podID="8b758f72-1c19-45ea-8f26-580952f254a6" containerID="935cf80d7a9856e0a66b21d9b86b0fed97665532ad80b040c550b50951c14c19" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.801439 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5s99" event={"ID":"8b758f72-1c19-45ea-8f26-580952f254a6","Type":"ContainerDied","Data":"935cf80d7a9856e0a66b21d9b86b0fed97665532ad80b040c550b50951c14c19"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.807639 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" 
event={"ID":"f6822615-2e54-40b4-a17f-9d5fb26e31db","Type":"ContainerStarted","Data":"8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.807702 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" event={"ID":"f6822615-2e54-40b4-a17f-9d5fb26e31db","Type":"ContainerStarted","Data":"ec45a798e536db03e60218024cbf350c164b3ba144b87a54d410e02900429d88"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.808578 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.822377 5115 generic.go:358] "Generic (PLEG): container finished" podID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerID="a33dfb9140b05712014768cf8b01acc9283196096d0f87e1b764f33c91c5086f" exitCode=0 Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.822655 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerDied","Data":"a33dfb9140b05712014768cf8b01acc9283196096d0f87e1b764f33c91c5086f"} Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.856294 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8469db6cb8-pclzc"] Jan 20 09:10:51 crc kubenswrapper[5115]: W0120 09:10:51.874866 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88667356_ca96_429b_a986_2018168d5da2.slice/crio-bc82955899180d05360cc4862d2c67462685d6b730e0a5fb73668f78e7e7679f WatchSource:0}: Error finding container bc82955899180d05360cc4862d2c67462685d6b730e0a5fb73668f78e7e7679f: Status 404 returned error can't find the container with id 
bc82955899180d05360cc4862d2c67462685d6b730e0a5fb73668f78e7e7679f Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.886421 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"] Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.897252 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-lg8fb"] Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.902027 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"] Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.904948 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-jxpqr"] Jan 20 09:10:51 crc kubenswrapper[5115]: I0120 09:10:51.919697 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" podStartSLOduration=14.919666294 podStartE2EDuration="14.919666294s" podCreationTimestamp="2026-01-20 09:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:51.914102435 +0000 UTC m=+162.082880965" watchObservedRunningTime="2026-01-20 09:10:51.919666294 +0000 UTC m=+162.088444824" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.224442 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d93cff2-21b0-4fcb-b899-b6efe5a56822" path="/var/lib/kubelet/pods/4d93cff2-21b0-4fcb-b899-b6efe5a56822/volumes" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.225756 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="664dc1e9-b220-4dd9-8576-b5798850bc57" path="/var/lib/kubelet/pods/664dc1e9-b220-4dd9-8576-b5798850bc57/volumes" Jan 20 09:10:52 crc kubenswrapper[5115]: 
I0120 09:10:52.226432 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b39cc292-22ad-4fb0-9d3f-6467c81680eb" path="/var/lib/kubelet/pods/b39cc292-22ad-4fb0-9d3f-6467c81680eb/volumes" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.696980 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.832188 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrnvw" event={"ID":"e388c4ad-0d02-4736-b503-a96f7478edb4","Type":"ContainerStarted","Data":"12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.834783 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerStarted","Data":"262846a0b39ea0c22c3e2461fb7a80f6f691c5c001332b515947c0f30875a14d"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.837603 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5plkc" event={"ID":"f9d4e242-d348-4f3f-8453-612b19e41f3a","Type":"ContainerStarted","Data":"094fa074aa44e27d111ea636cfa5e177561853a33b91fef37dd4590007b099fc"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.841528 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerStarted","Data":"3b2695392662c24c56f1422eadae97e754a2f16833a327817bd2b7835887f6bf"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.845502 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5s99" 
event={"ID":"8b758f72-1c19-45ea-8f26-580952f254a6","Type":"ContainerStarted","Data":"fc2a291b34f7498fa1d59e04fd9f020e1e86521c4cb4fc751ea58888835018e9"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.848120 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerStarted","Data":"c06f862960c9bdfaf0ac5b708c347681a6defb95c62d2ffbb57bb0f49aff19dc"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.850299 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerStarted","Data":"16d160e92d5f6eb7e86089b3e9ed2b1d0541d36b9b9f8bf35054aecefda063d4"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.854410 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn6h9" event={"ID":"c182ef91-1ca8-4330-bd75-8120c4401b54","Type":"ContainerStarted","Data":"310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.856554 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" event={"ID":"88667356-ca96-429b-a986-2018168d5da2","Type":"ContainerStarted","Data":"787d6296c837165f0031e2f3b6f84cf69106700382a0334b57d327ab1bd28e64"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.856598 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" event={"ID":"88667356-ca96-429b-a986-2018168d5da2","Type":"ContainerStarted","Data":"bc82955899180d05360cc4862d2c67462685d6b730e0a5fb73668f78e7e7679f"} Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.859228 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mrnvw" 
podStartSLOduration=5.700094382 podStartE2EDuration="36.85920782s" podCreationTimestamp="2026-01-20 09:10:16 +0000 UTC" firstStartedPulling="2026-01-20 09:10:19.258009508 +0000 UTC m=+129.426788038" lastFinishedPulling="2026-01-20 09:10:50.417122946 +0000 UTC m=+160.585901476" observedRunningTime="2026-01-20 09:10:52.851880404 +0000 UTC m=+163.020658944" watchObservedRunningTime="2026-01-20 09:10:52.85920782 +0000 UTC m=+163.027986350" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.895467 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5plkc" podStartSLOduration=4.893027077 podStartE2EDuration="34.89544151s" podCreationTimestamp="2026-01-20 09:10:18 +0000 UTC" firstStartedPulling="2026-01-20 09:10:20.332881573 +0000 UTC m=+130.501660103" lastFinishedPulling="2026-01-20 09:10:50.335295996 +0000 UTC m=+160.504074536" observedRunningTime="2026-01-20 09:10:52.872609198 +0000 UTC m=+163.041387748" watchObservedRunningTime="2026-01-20 09:10:52.89544151 +0000 UTC m=+163.064220040" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.931956 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b5s99" podStartSLOduration=4.869077975 podStartE2EDuration="34.931934246s" podCreationTimestamp="2026-01-20 09:10:18 +0000 UTC" firstStartedPulling="2026-01-20 09:10:20.323560493 +0000 UTC m=+130.492339023" lastFinishedPulling="2026-01-20 09:10:50.386416764 +0000 UTC m=+160.555195294" observedRunningTime="2026-01-20 09:10:52.930255601 +0000 UTC m=+163.099034131" watchObservedRunningTime="2026-01-20 09:10:52.931934246 +0000 UTC m=+163.100712776" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.932210 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vv5qk" podStartSLOduration=4.94167652 podStartE2EDuration="33.932203833s" podCreationTimestamp="2026-01-20 09:10:19 
+0000 UTC" firstStartedPulling="2026-01-20 09:10:21.393668001 +0000 UTC m=+131.562446531" lastFinishedPulling="2026-01-20 09:10:50.384195314 +0000 UTC m=+160.552973844" observedRunningTime="2026-01-20 09:10:52.898505912 +0000 UTC m=+163.067284442" watchObservedRunningTime="2026-01-20 09:10:52.932203833 +0000 UTC m=+163.100982363" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.949826 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2dlnj" podStartSLOduration=5.897830594 podStartE2EDuration="36.949799533s" podCreationTimestamp="2026-01-20 09:10:16 +0000 UTC" firstStartedPulling="2026-01-20 09:10:19.310502875 +0000 UTC m=+129.479281405" lastFinishedPulling="2026-01-20 09:10:50.362471814 +0000 UTC m=+160.531250344" observedRunningTime="2026-01-20 09:10:52.948270903 +0000 UTC m=+163.117049433" watchObservedRunningTime="2026-01-20 09:10:52.949799533 +0000 UTC m=+163.118578083" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.972865 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-45pv6" podStartSLOduration=4.927397776 podStartE2EDuration="33.97284215s" podCreationTimestamp="2026-01-20 09:10:19 +0000 UTC" firstStartedPulling="2026-01-20 09:10:21.369835852 +0000 UTC m=+131.538614382" lastFinishedPulling="2026-01-20 09:10:50.415280226 +0000 UTC m=+160.584058756" observedRunningTime="2026-01-20 09:10:52.970765574 +0000 UTC m=+163.139544104" watchObservedRunningTime="2026-01-20 09:10:52.97284215 +0000 UTC m=+163.141620670" Jan 20 09:10:52 crc kubenswrapper[5115]: I0120 09:10:52.992727 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ln8lc" podStartSLOduration=5.911964232 podStartE2EDuration="36.992705562s" podCreationTimestamp="2026-01-20 09:10:16 +0000 UTC" firstStartedPulling="2026-01-20 09:10:19.281413085 +0000 UTC m=+129.450191615" 
lastFinishedPulling="2026-01-20 09:10:50.362154415 +0000 UTC m=+160.530932945" observedRunningTime="2026-01-20 09:10:52.987173114 +0000 UTC m=+163.155951654" watchObservedRunningTime="2026-01-20 09:10:52.992705562 +0000 UTC m=+163.161484092" Jan 20 09:10:53 crc kubenswrapper[5115]: I0120 09:10:53.041043 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" podStartSLOduration=16.041025115 podStartE2EDuration="16.041025115s" podCreationTimestamp="2026-01-20 09:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:53.013216441 +0000 UTC m=+163.181994971" watchObservedRunningTime="2026-01-20 09:10:53.041025115 +0000 UTC m=+163.209803645" Jan 20 09:10:53 crc kubenswrapper[5115]: I0120 09:10:53.043137 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cn6h9" podStartSLOduration=5.910013619 podStartE2EDuration="37.043129711s" podCreationTimestamp="2026-01-20 09:10:16 +0000 UTC" firstStartedPulling="2026-01-20 09:10:19.251091583 +0000 UTC m=+129.419870113" lastFinishedPulling="2026-01-20 09:10:50.384207675 +0000 UTC m=+160.552986205" observedRunningTime="2026-01-20 09:10:53.038810685 +0000 UTC m=+163.207589215" watchObservedRunningTime="2026-01-20 09:10:53.043129711 +0000 UTC m=+163.211908231" Jan 20 09:10:53 crc kubenswrapper[5115]: I0120 09:10:53.862466 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:53 crc kubenswrapper[5115]: I0120 09:10:53.868429 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.340828 5115 kubelet.go:2553] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-8469db6cb8-pclzc"] Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.379330 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"] Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.380091 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" podUID="f6822615-2e54-40b4-a17f-9d5fb26e31db" containerName="route-controller-manager" containerID="cri-o://8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319" gracePeriod=30 Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.758526 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.790694 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"] Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.805260 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6822615-2e54-40b4-a17f-9d5fb26e31db" containerName="route-controller-manager" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.805295 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6822615-2e54-40b4-a17f-9d5fb26e31db" containerName="route-controller-manager" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.805459 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="f6822615-2e54-40b4-a17f-9d5fb26e31db" containerName="route-controller-manager" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.851621 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"] Jan 20 09:10:55 crc 
kubenswrapper[5115]: I0120 09:10:55.851811 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.878754 5115 generic.go:358] "Generic (PLEG): container finished" podID="f6822615-2e54-40b4-a17f-9d5fb26e31db" containerID="8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319" exitCode=0 Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.879604 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.879710 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" event={"ID":"f6822615-2e54-40b4-a17f-9d5fb26e31db","Type":"ContainerDied","Data":"8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319"} Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.879756 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9" event={"ID":"f6822615-2e54-40b4-a17f-9d5fb26e31db","Type":"ContainerDied","Data":"ec45a798e536db03e60218024cbf350c164b3ba144b87a54d410e02900429d88"} Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.879783 5115 scope.go:117] "RemoveContainer" containerID="8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.904068 5115 scope.go:117] "RemoveContainer" containerID="8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319" Jan 20 09:10:55 crc kubenswrapper[5115]: E0120 09:10:55.904639 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319\": container with ID starting with 8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319 not found: ID does not exist" containerID="8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.904685 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319"} err="failed to get container status \"8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319\": rpc error: code = NotFound desc = could not find container \"8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319\": container with ID starting with 8d2ed5414b2aa5ec8036fc1daac27d0b00dca8637fc9aa123e503905f2f66319 not found: ID does not exist" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.912660 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6822615-2e54-40b4-a17f-9d5fb26e31db-tmp\") pod \"f6822615-2e54-40b4-a17f-9d5fb26e31db\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.912732 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-client-ca\") pod \"f6822615-2e54-40b4-a17f-9d5fb26e31db\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.912797 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd9fq\" (UniqueName: \"kubernetes.io/projected/f6822615-2e54-40b4-a17f-9d5fb26e31db-kube-api-access-hd9fq\") pod \"f6822615-2e54-40b4-a17f-9d5fb26e31db\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.912830 5115 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6822615-2e54-40b4-a17f-9d5fb26e31db-serving-cert\") pod \"f6822615-2e54-40b4-a17f-9d5fb26e31db\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.912922 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-config\") pod \"f6822615-2e54-40b4-a17f-9d5fb26e31db\" (UID: \"f6822615-2e54-40b4-a17f-9d5fb26e31db\") " Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.913281 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6822615-2e54-40b4-a17f-9d5fb26e31db-tmp" (OuterVolumeSpecName: "tmp") pod "f6822615-2e54-40b4-a17f-9d5fb26e31db" (UID: "f6822615-2e54-40b4-a17f-9d5fb26e31db"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.913998 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-client-ca" (OuterVolumeSpecName: "client-ca") pod "f6822615-2e54-40b4-a17f-9d5fb26e31db" (UID: "f6822615-2e54-40b4-a17f-9d5fb26e31db"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.914035 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-config" (OuterVolumeSpecName: "config") pod "f6822615-2e54-40b4-a17f-9d5fb26e31db" (UID: "f6822615-2e54-40b4-a17f-9d5fb26e31db"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.921362 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6822615-2e54-40b4-a17f-9d5fb26e31db-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f6822615-2e54-40b4-a17f-9d5fb26e31db" (UID: "f6822615-2e54-40b4-a17f-9d5fb26e31db"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:10:55 crc kubenswrapper[5115]: I0120 09:10:55.922909 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6822615-2e54-40b4-a17f-9d5fb26e31db-kube-api-access-hd9fq" (OuterVolumeSpecName: "kube-api-access-hd9fq") pod "f6822615-2e54-40b4-a17f-9d5fb26e31db" (UID: "f6822615-2e54-40b4-a17f-9d5fb26e31db"). InnerVolumeSpecName "kube-api-access-hd9fq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.015160 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea354490-c1e9-4cb2-a05e-2691aa628f04-serving-cert\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.015225 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea354490-c1e9-4cb2-a05e-2691aa628f04-tmp\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.015291 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-client-ca\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.015311 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvb75\" (UniqueName: \"kubernetes.io/projected/ea354490-c1e9-4cb2-a05e-2691aa628f04-kube-api-access-lvb75\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.016242 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-config\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.016667 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6822615-2e54-40b4-a17f-9d5fb26e31db-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.016774 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.016871 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hd9fq\" (UniqueName: \"kubernetes.io/projected/f6822615-2e54-40b4-a17f-9d5fb26e31db-kube-api-access-hd9fq\") on node \"crc\" DevicePath \"\"" Jan 
20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.016995 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6822615-2e54-40b4-a17f-9d5fb26e31db-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.017089 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6822615-2e54-40b4-a17f-9d5fb26e31db-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.118508 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-client-ca\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.119131 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lvb75\" (UniqueName: \"kubernetes.io/projected/ea354490-c1e9-4cb2-a05e-2691aa628f04-kube-api-access-lvb75\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.119167 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-config\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.119244 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ea354490-c1e9-4cb2-a05e-2691aa628f04-serving-cert\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.119267 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea354490-c1e9-4cb2-a05e-2691aa628f04-tmp\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.120434 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea354490-c1e9-4cb2-a05e-2691aa628f04-tmp\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.120780 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-client-ca\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.121566 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-config\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.132098 5115 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea354490-c1e9-4cb2-a05e-2691aa628f04-serving-cert\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.138546 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvb75\" (UniqueName: \"kubernetes.io/projected/ea354490-c1e9-4cb2-a05e-2691aa628f04-kube-api-access-lvb75\") pod \"route-controller-manager-6dbb47955d-p9csw\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") " pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.167884 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.216321 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"] Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.225768 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64b4fd558d-xn8z9"] Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.369879 5115 ???:1] "http: TLS handshake error from 192.168.126.11:37614: no serving certificate available for the kubelet" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.641810 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"] Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.668928 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.668982 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.868136 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.893622 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" event={"ID":"ea354490-c1e9-4cb2-a05e-2691aa628f04","Type":"ContainerStarted","Data":"ecc488089e2907ad65741a46b809cf94a5a4a9b7392b79f53726c2b0b4d5c94f"} Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.911628 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.911702 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:56 crc kubenswrapper[5115]: I0120 09:10:56.961098 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.123491 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.123566 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.161677 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 
09:10:57.247010 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.397668 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" podUID="88667356-ca96-429b-a986-2018168d5da2" containerName="controller-manager" containerID="cri-o://787d6296c837165f0031e2f3b6f84cf69106700382a0334b57d327ab1bd28e64" gracePeriod=30 Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.674778 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.674825 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.674843 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.674886 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.675029 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.675246 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.675357 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.677257 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.679559 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.680339 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.843842 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e702626-2df6-4412-a9e4-9b6046e5d143-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.844416 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e702626-2df6-4412-a9e4-9b6046e5d143-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.900474 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" event={"ID":"ea354490-c1e9-4cb2-a05e-2691aa628f04","Type":"ContainerStarted","Data":"30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be"} Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.902294 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.904478 5115 generic.go:358] "Generic (PLEG): container finished" podID="88667356-ca96-429b-a986-2018168d5da2" containerID="787d6296c837165f0031e2f3b6f84cf69106700382a0334b57d327ab1bd28e64" exitCode=0 Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.905526 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" event={"ID":"88667356-ca96-429b-a986-2018168d5da2","Type":"ContainerDied","Data":"787d6296c837165f0031e2f3b6f84cf69106700382a0334b57d327ab1bd28e64"} Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.921792 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" podStartSLOduration=2.921766562 podStartE2EDuration="2.921766562s" podCreationTimestamp="2026-01-20 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:57.91979574 +0000 UTC m=+168.088574270" watchObservedRunningTime="2026-01-20 09:10:57.921766562 +0000 UTC m=+168.090545102" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.945612 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e702626-2df6-4412-a9e4-9b6046e5d143-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.945723 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e702626-2df6-4412-a9e4-9b6046e5d143-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: 
\"5e702626-2df6-4412-a9e4-9b6046e5d143\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.945817 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e702626-2df6-4412-a9e4-9b6046e5d143-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.970038 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e702626-2df6-4412-a9e4-9b6046e5d143-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.975767 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:10:57 crc kubenswrapper[5115]: I0120 09:10:57.993329 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.226074 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6822615-2e54-40b4-a17f-9d5fb26e31db" path="/var/lib/kubelet/pods/f6822615-2e54-40b4-a17f-9d5fb26e31db/volumes" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.308238 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn6h9"] Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.418352 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.537593 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.680122 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.720612 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-856c8c4494-gzm5q"] Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.721644 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="88667356-ca96-429b-a986-2018168d5da2" containerName="controller-manager" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.721660 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="88667356-ca96-429b-a986-2018168d5da2" containerName="controller-manager" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.721855 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="88667356-ca96-429b-a986-2018168d5da2" containerName="controller-manager" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.768018 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.780177 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-856c8c4494-gzm5q"] Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.854018 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.854073 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.866423 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-proxy-ca-bundles\") pod \"88667356-ca96-429b-a986-2018168d5da2\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.866571 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4kgn\" (UniqueName: \"kubernetes.io/projected/88667356-ca96-429b-a986-2018168d5da2-kube-api-access-l4kgn\") pod \"88667356-ca96-429b-a986-2018168d5da2\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.866647 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88667356-ca96-429b-a986-2018168d5da2-serving-cert\") pod \"88667356-ca96-429b-a986-2018168d5da2\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.866752 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88667356-ca96-429b-a986-2018168d5da2-tmp\") pod 
\"88667356-ca96-429b-a986-2018168d5da2\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.866822 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-config\") pod \"88667356-ca96-429b-a986-2018168d5da2\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.866949 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-client-ca\") pod \"88667356-ca96-429b-a986-2018168d5da2\" (UID: \"88667356-ca96-429b-a986-2018168d5da2\") " Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867109 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3904fe4-fb4d-4794-8d28-a76e420c437f-serving-cert\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867143 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g99kg\" (UniqueName: \"kubernetes.io/projected/c3904fe4-fb4d-4794-8d28-a76e420c437f-kube-api-access-g99kg\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867181 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-proxy-ca-bundles\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: 
\"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867211 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-config\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867284 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3904fe4-fb4d-4794-8d28-a76e420c437f-tmp\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867354 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-client-ca\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.867910 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "88667356-ca96-429b-a986-2018168d5da2" (UID: "88667356-ca96-429b-a986-2018168d5da2"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.868200 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88667356-ca96-429b-a986-2018168d5da2-tmp" (OuterVolumeSpecName: "tmp") pod "88667356-ca96-429b-a986-2018168d5da2" (UID: "88667356-ca96-429b-a986-2018168d5da2"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.868646 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-config" (OuterVolumeSpecName: "config") pod "88667356-ca96-429b-a986-2018168d5da2" (UID: "88667356-ca96-429b-a986-2018168d5da2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.868771 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-client-ca" (OuterVolumeSpecName: "client-ca") pod "88667356-ca96-429b-a986-2018168d5da2" (UID: "88667356-ca96-429b-a986-2018168d5da2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.872926 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88667356-ca96-429b-a986-2018168d5da2-kube-api-access-l4kgn" (OuterVolumeSpecName: "kube-api-access-l4kgn") pod "88667356-ca96-429b-a986-2018168d5da2" (UID: "88667356-ca96-429b-a986-2018168d5da2"). InnerVolumeSpecName "kube-api-access-l4kgn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.872939 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88667356-ca96-429b-a986-2018168d5da2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "88667356-ca96-429b-a986-2018168d5da2" (UID: "88667356-ca96-429b-a986-2018168d5da2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.901698 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.916529 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.917142 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8469db6cb8-pclzc" event={"ID":"88667356-ca96-429b-a986-2018168d5da2","Type":"ContainerDied","Data":"bc82955899180d05360cc4862d2c67462685d6b730e0a5fb73668f78e7e7679f"} Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.917193 5115 scope.go:117] "RemoveContainer" containerID="787d6296c837165f0031e2f3b6f84cf69106700382a0334b57d327ab1bd28e64" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.919461 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5e702626-2df6-4412-a9e4-9b6046e5d143","Type":"ContainerStarted","Data":"3a45326bcfd846639f58cac83f8e8699a7606ca325de927d7dc1eacf7e6baf6a"} Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.956670 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8469db6cb8-pclzc"] Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.960970 5115 
kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8469db6cb8-pclzc"] Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969033 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-client-ca\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969120 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3904fe4-fb4d-4794-8d28-a76e420c437f-serving-cert\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969152 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g99kg\" (UniqueName: \"kubernetes.io/projected/c3904fe4-fb4d-4794-8d28-a76e420c437f-kube-api-access-g99kg\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969186 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-proxy-ca-bundles\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969215 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-config\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969259 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3904fe4-fb4d-4794-8d28-a76e420c437f-tmp\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969326 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/88667356-ca96-429b-a986-2018168d5da2-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969343 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969359 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969375 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88667356-ca96-429b-a986-2018168d5da2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.969394 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l4kgn\" (UniqueName: \"kubernetes.io/projected/88667356-ca96-429b-a986-2018168d5da2-kube-api-access-l4kgn\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 
09:10:58.969409 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88667356-ca96-429b-a986-2018168d5da2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.971653 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-client-ca\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.972337 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-proxy-ca-bundles\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.974030 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-config\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.974434 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3904fe4-fb4d-4794-8d28-a76e420c437f-tmp\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.975416 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.977959 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3904fe4-fb4d-4794-8d28-a76e420c437f-serving-cert\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:58 crc kubenswrapper[5115]: I0120 09:10:58.988005 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g99kg\" (UniqueName: \"kubernetes.io/projected/c3904fe4-fb4d-4794-8d28-a76e420c437f-kube-api-access-g99kg\") pod \"controller-manager-856c8c4494-gzm5q\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") " pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.099161 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.138351 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.138404 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.190246 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.529686 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-856c8c4494-gzm5q"] Jan 20 09:10:59 crc kubenswrapper[5115]: W0120 09:10:59.537112 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3904fe4_fb4d_4794_8d28_a76e420c437f.slice/crio-c797a19ebd1b6ba916e33ac8707e5acaa7f5238c9ba9c86fc08a09140acea056 WatchSource:0}: Error finding container c797a19ebd1b6ba916e33ac8707e5acaa7f5238c9ba9c86fc08a09140acea056: Status 404 returned error can't find the container with id c797a19ebd1b6ba916e33ac8707e5acaa7f5238c9ba9c86fc08a09140acea056 Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.831660 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.832240 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.887693 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:10:59 crc 
kubenswrapper[5115]: I0120 09:10:59.936672 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" event={"ID":"c3904fe4-fb4d-4794-8d28-a76e420c437f","Type":"ContainerStarted","Data":"42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a"} Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.936733 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" event={"ID":"c3904fe4-fb4d-4794-8d28-a76e420c437f","Type":"ContainerStarted","Data":"c797a19ebd1b6ba916e33ac8707e5acaa7f5238c9ba9c86fc08a09140acea056"} Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.937105 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.938356 5115 generic.go:358] "Generic (PLEG): container finished" podID="5e702626-2df6-4412-a9e4-9b6046e5d143" containerID="e6103a0933a658cea6904c3a48521826045b7fe22397fb3db0c7bb8cc7460e00" exitCode=0 Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.938954 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5e702626-2df6-4412-a9e4-9b6046e5d143","Type":"ContainerDied","Data":"e6103a0933a658cea6904c3a48521826045b7fe22397fb3db0c7bb8cc7460e00"} Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.939097 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cn6h9" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="registry-server" containerID="cri-o://310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74" gracePeriod=2 Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.958593 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" podStartSLOduration=4.958560834 podStartE2EDuration="4.958560834s" podCreationTimestamp="2026-01-20 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:10:59.957363202 +0000 UTC m=+170.126141752" watchObservedRunningTime="2026-01-20 09:10:59.958560834 +0000 UTC m=+170.127339384" Jan 20 09:10:59 crc kubenswrapper[5115]: I0120 09:10:59.988116 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.002741 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.109258 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ln8lc"] Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.187175 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.230183 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88667356-ca96-429b-a986-2018168d5da2" path="/var/lib/kubelet/pods/88667356-ca96-429b-a986-2018168d5da2/volumes" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.258260 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.259089 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.314254 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.476703 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.611085 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-catalog-content\") pod \"c182ef91-1ca8-4330-bd75-8120c4401b54\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.611186 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-utilities\") pod \"c182ef91-1ca8-4330-bd75-8120c4401b54\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.611272 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp6vx\" (UniqueName: \"kubernetes.io/projected/c182ef91-1ca8-4330-bd75-8120c4401b54-kube-api-access-fp6vx\") pod \"c182ef91-1ca8-4330-bd75-8120c4401b54\" (UID: \"c182ef91-1ca8-4330-bd75-8120c4401b54\") " Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.613167 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-utilities" (OuterVolumeSpecName: "utilities") pod "c182ef91-1ca8-4330-bd75-8120c4401b54" (UID: "c182ef91-1ca8-4330-bd75-8120c4401b54"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.620371 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c182ef91-1ca8-4330-bd75-8120c4401b54-kube-api-access-fp6vx" (OuterVolumeSpecName: "kube-api-access-fp6vx") pod "c182ef91-1ca8-4330-bd75-8120c4401b54" (UID: "c182ef91-1ca8-4330-bd75-8120c4401b54"). InnerVolumeSpecName "kube-api-access-fp6vx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.660103 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c182ef91-1ca8-4330-bd75-8120c4401b54" (UID: "c182ef91-1ca8-4330-bd75-8120c4401b54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.713519 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.713553 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c182ef91-1ca8-4330-bd75-8120c4401b54-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.713563 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fp6vx\" (UniqueName: \"kubernetes.io/projected/c182ef91-1ca8-4330-bd75-8120c4401b54-kube-api-access-fp6vx\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.948359 5115 generic.go:358] "Generic (PLEG): container finished" podID="c182ef91-1ca8-4330-bd75-8120c4401b54" 
containerID="310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74" exitCode=0 Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.948503 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn6h9" event={"ID":"c182ef91-1ca8-4330-bd75-8120c4401b54","Type":"ContainerDied","Data":"310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74"} Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.948577 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn6h9" event={"ID":"c182ef91-1ca8-4330-bd75-8120c4401b54","Type":"ContainerDied","Data":"91ffd30d0b07fe8b71ba5e2b62abd0321e935c136baf579cb7b5b85fbfc8da21"} Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.948607 5115 scope.go:117] "RemoveContainer" containerID="310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74" Jan 20 09:11:00 crc kubenswrapper[5115]: I0120 09:11:00.948803 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cn6h9" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.006285 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ln8lc" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="registry-server" containerID="cri-o://262846a0b39ea0c22c3e2461fb7a80f6f691c5c001332b515947c0f30875a14d" gracePeriod=2 Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.069833 5115 scope.go:117] "RemoveContainer" containerID="288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.070188 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.097925 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn6h9"] Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.109471 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cn6h9"] Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.126579 5115 scope.go:117] "RemoveContainer" containerID="cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.157466 5115 scope.go:117] "RemoveContainer" containerID="310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74" Jan 20 09:11:01 crc kubenswrapper[5115]: E0120 09:11:01.159147 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74\": container with ID starting with 310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74 not found: ID does not exist" containerID="310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74" Jan 20 
09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.159179 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74"} err="failed to get container status \"310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74\": rpc error: code = NotFound desc = could not find container \"310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74\": container with ID starting with 310176ff3faa068eec35b262e875ff2ef66e7e5cb3cf7c06006974317bf85b74 not found: ID does not exist" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.159208 5115 scope.go:117] "RemoveContainer" containerID="288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036" Jan 20 09:11:01 crc kubenswrapper[5115]: E0120 09:11:01.160875 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036\": container with ID starting with 288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036 not found: ID does not exist" containerID="288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.160929 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036"} err="failed to get container status \"288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036\": rpc error: code = NotFound desc = could not find container \"288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036\": container with ID starting with 288865d63bc61bc4176419a2d913e42143434094aaa92d600adfeadef0831036 not found: ID does not exist" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.160944 5115 scope.go:117] "RemoveContainer" 
containerID="cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a" Jan 20 09:11:01 crc kubenswrapper[5115]: E0120 09:11:01.161940 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a\": container with ID starting with cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a not found: ID does not exist" containerID="cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.161958 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a"} err="failed to get container status \"cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a\": rpc error: code = NotFound desc = could not find container \"cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a\": container with ID starting with cb35ec44b7685ef3772567937b1f41239bca24193257b445ea714ac16c6bf55a not found: ID does not exist" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.237665 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.321144 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e702626-2df6-4412-a9e4-9b6046e5d143-kubelet-dir\") pod \"5e702626-2df6-4412-a9e4-9b6046e5d143\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.321249 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e702626-2df6-4412-a9e4-9b6046e5d143-kube-api-access\") pod \"5e702626-2df6-4412-a9e4-9b6046e5d143\" (UID: \"5e702626-2df6-4412-a9e4-9b6046e5d143\") " Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.321263 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e702626-2df6-4412-a9e4-9b6046e5d143-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5e702626-2df6-4412-a9e4-9b6046e5d143" (UID: "5e702626-2df6-4412-a9e4-9b6046e5d143"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.321550 5115 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e702626-2df6-4412-a9e4-9b6046e5d143-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.329764 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e702626-2df6-4412-a9e4-9b6046e5d143-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5e702626-2df6-4412-a9e4-9b6046e5d143" (UID: "5e702626-2df6-4412-a9e4-9b6046e5d143"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.423127 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e702626-2df6-4412-a9e4-9b6046e5d143-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.961683 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.961714 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"5e702626-2df6-4412-a9e4-9b6046e5d143","Type":"ContainerDied","Data":"3a45326bcfd846639f58cac83f8e8699a7606ca325de927d7dc1eacf7e6baf6a"} Jan 20 09:11:01 crc kubenswrapper[5115]: I0120 09:11:01.965321 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a45326bcfd846639f58cac83f8e8699a7606ca325de927d7dc1eacf7e6baf6a" Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.229856 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" path="/var/lib/kubelet/pods/c182ef91-1ca8-4330-bd75-8120c4401b54/volumes" Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.515798 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5s99"] Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.516307 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b5s99" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="registry-server" containerID="cri-o://fc2a291b34f7498fa1d59e04fd9f020e1e86521c4cb4fc751ea58888835018e9" gracePeriod=2 Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.974440 5115 generic.go:358] "Generic (PLEG): container finished" 
podID="8b758f72-1c19-45ea-8f26-580952f254a6" containerID="fc2a291b34f7498fa1d59e04fd9f020e1e86521c4cb4fc751ea58888835018e9" exitCode=0 Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.974551 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5s99" event={"ID":"8b758f72-1c19-45ea-8f26-580952f254a6","Type":"ContainerDied","Data":"fc2a291b34f7498fa1d59e04fd9f020e1e86521c4cb4fc751ea58888835018e9"} Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.979631 5115 generic.go:358] "Generic (PLEG): container finished" podID="098c57a3-a775-41d0-b528-6833df51eb70" containerID="262846a0b39ea0c22c3e2461fb7a80f6f691c5c001332b515947c0f30875a14d" exitCode=0 Jan 20 09:11:02 crc kubenswrapper[5115]: I0120 09:11:02.979689 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerDied","Data":"262846a0b39ea0c22c3e2461fb7a80f6f691c5c001332b515947c0f30875a14d"} Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.047576 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048415 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="extract-content" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048439 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="extract-content" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048464 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="registry-server" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048473 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" 
containerName="registry-server" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048502 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5e702626-2df6-4412-a9e4-9b6046e5d143" containerName="pruner" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048510 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e702626-2df6-4412-a9e4-9b6046e5d143" containerName="pruner" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048526 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="extract-utilities" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048534 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="extract-utilities" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048644 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="c182ef91-1ca8-4330-bd75-8120c4401b54" containerName="registry-server" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.048666 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="5e702626-2df6-4412-a9e4-9b6046e5d143" containerName="pruner" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.055337 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.057840 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.058313 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.069189 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.110300 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vv5qk"] Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.152678 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128ab750-3574-4f36-a27e-5bddc737a52d-kube-api-access\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.152727 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.152915 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-var-lock\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 20 
09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.161989 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254280 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft22z\" (UniqueName: \"kubernetes.io/projected/098c57a3-a775-41d0-b528-6833df51eb70-kube-api-access-ft22z\") pod \"098c57a3-a775-41d0-b528-6833df51eb70\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254374 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-catalog-content\") pod \"098c57a3-a775-41d0-b528-6833df51eb70\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254410 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-utilities\") pod \"098c57a3-a775-41d0-b528-6833df51eb70\" (UID: \"098c57a3-a775-41d0-b528-6833df51eb70\") " Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254720 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128ab750-3574-4f36-a27e-5bddc737a52d-kube-api-access\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254749 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " 
pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254846 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-var-lock\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.254959 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-var-lock\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.255018 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.256026 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-utilities" (OuterVolumeSpecName: "utilities") pod "098c57a3-a775-41d0-b528-6833df51eb70" (UID: "098c57a3-a775-41d0-b528-6833df51eb70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.267308 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098c57a3-a775-41d0-b528-6833df51eb70-kube-api-access-ft22z" (OuterVolumeSpecName: "kube-api-access-ft22z") pod "098c57a3-a775-41d0-b528-6833df51eb70" (UID: "098c57a3-a775-41d0-b528-6833df51eb70"). InnerVolumeSpecName "kube-api-access-ft22z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.287925 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "098c57a3-a775-41d0-b528-6833df51eb70" (UID: "098c57a3-a775-41d0-b528-6833df51eb70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.289196 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128ab750-3574-4f36-a27e-5bddc737a52d-kube-api-access\") pod \"installer-12-crc\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.356352 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ft22z\" (UniqueName: \"kubernetes.io/projected/098c57a3-a775-41d0-b528-6833df51eb70-kube-api-access-ft22z\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.356912 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.357033 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/098c57a3-a775-41d0-b528-6833df51eb70-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.381969 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.432076 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.459541 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdgqv\" (UniqueName: \"kubernetes.io/projected/8b758f72-1c19-45ea-8f26-580952f254a6-kube-api-access-pdgqv\") pod \"8b758f72-1c19-45ea-8f26-580952f254a6\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.459622 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-utilities\") pod \"8b758f72-1c19-45ea-8f26-580952f254a6\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.459718 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-catalog-content\") pod \"8b758f72-1c19-45ea-8f26-580952f254a6\" (UID: \"8b758f72-1c19-45ea-8f26-580952f254a6\") " Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.460858 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-utilities" (OuterVolumeSpecName: "utilities") pod "8b758f72-1c19-45ea-8f26-580952f254a6" (UID: "8b758f72-1c19-45ea-8f26-580952f254a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.470829 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b758f72-1c19-45ea-8f26-580952f254a6-kube-api-access-pdgqv" (OuterVolumeSpecName: "kube-api-access-pdgqv") pod "8b758f72-1c19-45ea-8f26-580952f254a6" (UID: "8b758f72-1c19-45ea-8f26-580952f254a6"). InnerVolumeSpecName "kube-api-access-pdgqv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.473492 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b758f72-1c19-45ea-8f26-580952f254a6" (UID: "8b758f72-1c19-45ea-8f26-580952f254a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.561362 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.561417 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pdgqv\" (UniqueName: \"kubernetes.io/projected/8b758f72-1c19-45ea-8f26-580952f254a6-kube-api-access-pdgqv\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.561430 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b758f72-1c19-45ea-8f26-580952f254a6-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.593484 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.988119 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ln8lc" event={"ID":"098c57a3-a775-41d0-b528-6833df51eb70","Type":"ContainerDied","Data":"092aa312ded9179826cf1c7718d79766d577bbc74bfdc3260b75b3acb73e6544"} Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.988203 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ln8lc" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.988644 5115 scope.go:117] "RemoveContainer" containerID="262846a0b39ea0c22c3e2461fb7a80f6f691c5c001332b515947c0f30875a14d" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.992226 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b5s99" event={"ID":"8b758f72-1c19-45ea-8f26-580952f254a6","Type":"ContainerDied","Data":"d7901e6ddc7891030f2ad2227e71e157692b55779b1855cb63d09ff8803bd38a"} Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.992259 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b5s99" Jan 20 09:11:03 crc kubenswrapper[5115]: I0120 09:11:03.994028 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"128ab750-3574-4f36-a27e-5bddc737a52d","Type":"ContainerStarted","Data":"2f73af6d69f6c232d9d9d0a495fca6672d15d9b3c8a84a1c612e0ef514970d06"} Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.043462 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5s99"] Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.045286 5115 scope.go:117] "RemoveContainer" containerID="ee94f68db59e4e1ddf21ca6ca9dd7fd93edccbc4ea24208558bcdd84d58df32e" Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.047062 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b5s99"] Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.087346 5115 scope.go:117] "RemoveContainer" containerID="f88e943d46c00e03b49000272db95a963fb31d5df3dc7dea80bbd32f957cb111" Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.110701 5115 scope.go:117] "RemoveContainer" containerID="fc2a291b34f7498fa1d59e04fd9f020e1e86521c4cb4fc751ea58888835018e9" Jan 20 09:11:04 crc 
kubenswrapper[5115]: I0120 09:11:04.129355 5115 scope.go:117] "RemoveContainer" containerID="935cf80d7a9856e0a66b21d9b86b0fed97665532ad80b040c550b50951c14c19" Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.150480 5115 scope.go:117] "RemoveContainer" containerID="bc05a2904480cda612c996cbe03bed8e6889a08a812820a545bd5567edf848da" Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.476811 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vv5qk" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="registry-server" containerID="cri-o://16d160e92d5f6eb7e86089b3e9ed2b1d0541d36b9b9f8bf35054aecefda063d4" gracePeriod=2 Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.490090 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" path="/var/lib/kubelet/pods/8b758f72-1c19-45ea-8f26-580952f254a6/volumes" Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.490956 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ln8lc"] Jan 20 09:11:04 crc kubenswrapper[5115]: I0120 09:11:04.490993 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ln8lc"] Jan 20 09:11:05 crc kubenswrapper[5115]: I0120 09:11:05.017559 5115 generic.go:358] "Generic (PLEG): container finished" podID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerID="16d160e92d5f6eb7e86089b3e9ed2b1d0541d36b9b9f8bf35054aecefda063d4" exitCode=0 Jan 20 09:11:05 crc kubenswrapper[5115]: I0120 09:11:05.018237 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerDied","Data":"16d160e92d5f6eb7e86089b3e9ed2b1d0541d36b9b9f8bf35054aecefda063d4"} Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.227759 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="098c57a3-a775-41d0-b528-6833df51eb70" path="/var/lib/kubelet/pods/098c57a3-a775-41d0-b528-6833df51eb70/volumes" Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.402996 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vv5qk" Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.515363 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4shq\" (UniqueName: \"kubernetes.io/projected/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-kube-api-access-w4shq\") pod \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.515460 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-catalog-content\") pod \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.515501 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-utilities\") pod \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\" (UID: \"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3\") " Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.517101 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-utilities" (OuterVolumeSpecName: "utilities") pod "b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" (UID: "b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.523935 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-kube-api-access-w4shq" (OuterVolumeSpecName: "kube-api-access-w4shq") pod "b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" (UID: "b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3"). InnerVolumeSpecName "kube-api-access-w4shq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.617921 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4shq\" (UniqueName: \"kubernetes.io/projected/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-kube-api-access-w4shq\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.617965 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.738539 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" (UID: "b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:11:06 crc kubenswrapper[5115]: I0120 09:11:06.821412 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.034617 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"128ab750-3574-4f36-a27e-5bddc737a52d","Type":"ContainerStarted","Data":"92b9831f290b04d0013bc0318c36c8ef1081a308ee1f6759b62245920ad2c43e"}
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.037909 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vv5qk" event={"ID":"b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3","Type":"ContainerDied","Data":"523e078e78e6cfb054a40a6916767e994deee00e08213d3cb61f49d65fa63001"}
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.037960 5115 scope.go:117] "RemoveContainer" containerID="16d160e92d5f6eb7e86089b3e9ed2b1d0541d36b9b9f8bf35054aecefda063d4"
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.038042 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vv5qk"
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.050882 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=4.050861249 podStartE2EDuration="4.050861249s" podCreationTimestamp="2026-01-20 09:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:11:07.049263696 +0000 UTC m=+177.218042276" watchObservedRunningTime="2026-01-20 09:11:07.050861249 +0000 UTC m=+177.219639779"
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.055601 5115 scope.go:117] "RemoveContainer" containerID="099a58929bcd11d7806830d94c60b1c1e735c7d4ed3c769e2373744a991c063d"
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.078871 5115 scope.go:117] "RemoveContainer" containerID="5c908a7c31ca720aadea8c8fd54b15fdf8ae8be43be8f76f2eb7b5413aeb74c6"
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.105452 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vv5qk"]
Jan 20 09:11:07 crc kubenswrapper[5115]: I0120 09:11:07.109602 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vv5qk"]
Jan 20 09:11:08 crc kubenswrapper[5115]: I0120 09:11:08.226761 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" path="/var/lib/kubelet/pods/b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3/volumes"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.332514 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-856c8c4494-gzm5q"]
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.334873 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" podUID="c3904fe4-fb4d-4794-8d28-a76e420c437f" containerName="controller-manager" containerID="cri-o://42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a" gracePeriod=30
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.356597 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"]
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.357722 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" podUID="ea354490-c1e9-4cb2-a05e-2691aa628f04" containerName="route-controller-manager" containerID="cri-o://30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be" gracePeriod=30
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.856046 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886103 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"]
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886779 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="extract-utilities"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886802 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="extract-utilities"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886819 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="extract-utilities"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886826 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="extract-utilities"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886834 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="extract-content"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886841 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="extract-content"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886852 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886857 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886865 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea354490-c1e9-4cb2-a05e-2691aa628f04" containerName="route-controller-manager"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886870 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea354490-c1e9-4cb2-a05e-2691aa628f04" containerName="route-controller-manager"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886880 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="extract-content"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886885 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="extract-content"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886906 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886911 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886924 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="extract-content"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886934 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="extract-content"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886943 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886950 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886964 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="extract-utilities"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.886969 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="extract-utilities"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.887075 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="098c57a3-a775-41d0-b528-6833df51eb70" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.887089 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="8b758f72-1c19-45ea-8f26-580952f254a6" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.887104 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3de3e2c-a8de-4bbc-a21f-286d9fd5f9a3" containerName="registry-server"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.887113 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea354490-c1e9-4cb2-a05e-2691aa628f04" containerName="route-controller-manager"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.898802 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"]
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.899271 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.904245 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea354490-c1e9-4cb2-a05e-2691aa628f04-tmp\") pod \"ea354490-c1e9-4cb2-a05e-2691aa628f04\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") "
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.904837 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-client-ca\") pod \"ea354490-c1e9-4cb2-a05e-2691aa628f04\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") "
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.904935 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea354490-c1e9-4cb2-a05e-2691aa628f04-serving-cert\") pod \"ea354490-c1e9-4cb2-a05e-2691aa628f04\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") "
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.904644 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea354490-c1e9-4cb2-a05e-2691aa628f04-tmp" (OuterVolumeSpecName: "tmp") pod "ea354490-c1e9-4cb2-a05e-2691aa628f04" (UID: "ea354490-c1e9-4cb2-a05e-2691aa628f04"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.905019 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvb75\" (UniqueName: \"kubernetes.io/projected/ea354490-c1e9-4cb2-a05e-2691aa628f04-kube-api-access-lvb75\") pod \"ea354490-c1e9-4cb2-a05e-2691aa628f04\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") "
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.905224 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-config\") pod \"ea354490-c1e9-4cb2-a05e-2691aa628f04\" (UID: \"ea354490-c1e9-4cb2-a05e-2691aa628f04\") "
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.905957 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ea354490-c1e9-4cb2-a05e-2691aa628f04-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.906013 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-client-ca" (OuterVolumeSpecName: "client-ca") pod "ea354490-c1e9-4cb2-a05e-2691aa628f04" (UID: "ea354490-c1e9-4cb2-a05e-2691aa628f04"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.906048 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-config" (OuterVolumeSpecName: "config") pod "ea354490-c1e9-4cb2-a05e-2691aa628f04" (UID: "ea354490-c1e9-4cb2-a05e-2691aa628f04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.921597 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea354490-c1e9-4cb2-a05e-2691aa628f04-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ea354490-c1e9-4cb2-a05e-2691aa628f04" (UID: "ea354490-c1e9-4cb2-a05e-2691aa628f04"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:11:15 crc kubenswrapper[5115]: I0120 09:11:15.928107 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea354490-c1e9-4cb2-a05e-2691aa628f04-kube-api-access-lvb75" (OuterVolumeSpecName: "kube-api-access-lvb75") pod "ea354490-c1e9-4cb2-a05e-2691aa628f04" (UID: "ea354490-c1e9-4cb2-a05e-2691aa628f04"). InnerVolumeSpecName "kube-api-access-lvb75". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007327 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0445ff5a-7f56-4085-98a2-35f8418fc9b5-serving-cert\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007387 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-client-ca\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007440 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-config\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007485 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0445ff5a-7f56-4085-98a2-35f8418fc9b5-tmp\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007513 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwjbl\" (UniqueName: \"kubernetes.io/projected/0445ff5a-7f56-4085-98a2-35f8418fc9b5-kube-api-access-gwjbl\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007570 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007584 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea354490-c1e9-4cb2-a05e-2691aa628f04-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007595 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lvb75\" (UniqueName: \"kubernetes.io/projected/ea354490-c1e9-4cb2-a05e-2691aa628f04-kube-api-access-lvb75\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.007606 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea354490-c1e9-4cb2-a05e-2691aa628f04-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.047000 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.077330 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"]
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.077968 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c3904fe4-fb4d-4794-8d28-a76e420c437f" containerName="controller-manager"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.077983 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3904fe4-fb4d-4794-8d28-a76e420c437f" containerName="controller-manager"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.078097 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="c3904fe4-fb4d-4794-8d28-a76e420c437f" containerName="controller-manager"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.085656 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.096354 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"]
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111302 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-config\") pod \"c3904fe4-fb4d-4794-8d28-a76e420c437f\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") "
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111389 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-client-ca\") pod \"c3904fe4-fb4d-4794-8d28-a76e420c437f\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") "
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111432 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g99kg\" (UniqueName: \"kubernetes.io/projected/c3904fe4-fb4d-4794-8d28-a76e420c437f-kube-api-access-g99kg\") pod \"c3904fe4-fb4d-4794-8d28-a76e420c437f\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") "
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111467 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3904fe4-fb4d-4794-8d28-a76e420c437f-serving-cert\") pod \"c3904fe4-fb4d-4794-8d28-a76e420c437f\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") "
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111579 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3904fe4-fb4d-4794-8d28-a76e420c437f-tmp\") pod \"c3904fe4-fb4d-4794-8d28-a76e420c437f\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") "
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111610 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-proxy-ca-bundles\") pod \"c3904fe4-fb4d-4794-8d28-a76e420c437f\" (UID: \"c3904fe4-fb4d-4794-8d28-a76e420c437f\") "
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111937 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0445ff5a-7f56-4085-98a2-35f8418fc9b5-serving-cert\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.111973 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-client-ca\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.112038 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-config\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.112094 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0445ff5a-7f56-4085-98a2-35f8418fc9b5-tmp\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.112124 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwjbl\" (UniqueName: \"kubernetes.io/projected/0445ff5a-7f56-4085-98a2-35f8418fc9b5-kube-api-access-gwjbl\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.114331 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-config\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.116694 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3904fe4-fb4d-4794-8d28-a76e420c437f-kube-api-access-g99kg" (OuterVolumeSpecName: "kube-api-access-g99kg") pod "c3904fe4-fb4d-4794-8d28-a76e420c437f" (UID: "c3904fe4-fb4d-4794-8d28-a76e420c437f"). InnerVolumeSpecName "kube-api-access-g99kg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.116925 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-client-ca" (OuterVolumeSpecName: "client-ca") pod "c3904fe4-fb4d-4794-8d28-a76e420c437f" (UID: "c3904fe4-fb4d-4794-8d28-a76e420c437f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.117373 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0445ff5a-7f56-4085-98a2-35f8418fc9b5-tmp\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.117372 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-config" (OuterVolumeSpecName: "config") pod "c3904fe4-fb4d-4794-8d28-a76e420c437f" (UID: "c3904fe4-fb4d-4794-8d28-a76e420c437f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.117738 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3904fe4-fb4d-4794-8d28-a76e420c437f-tmp" (OuterVolumeSpecName: "tmp") pod "c3904fe4-fb4d-4794-8d28-a76e420c437f" (UID: "c3904fe4-fb4d-4794-8d28-a76e420c437f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.118149 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c3904fe4-fb4d-4794-8d28-a76e420c437f" (UID: "c3904fe4-fb4d-4794-8d28-a76e420c437f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.119343 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-client-ca\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.119470 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0445ff5a-7f56-4085-98a2-35f8418fc9b5-serving-cert\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.123164 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3904fe4-fb4d-4794-8d28-a76e420c437f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c3904fe4-fb4d-4794-8d28-a76e420c437f" (UID: "c3904fe4-fb4d-4794-8d28-a76e420c437f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.128926 5115 generic.go:358] "Generic (PLEG): container finished" podID="ea354490-c1e9-4cb2-a05e-2691aa628f04" containerID="30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be" exitCode=0
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.128986 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" event={"ID":"ea354490-c1e9-4cb2-a05e-2691aa628f04","Type":"ContainerDied","Data":"30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be"}
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.129032 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw" event={"ID":"ea354490-c1e9-4cb2-a05e-2691aa628f04","Type":"ContainerDied","Data":"ecc488089e2907ad65741a46b809cf94a5a4a9b7392b79f53726c2b0b4d5c94f"}
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.129052 5115 scope.go:117] "RemoveContainer" containerID="30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.129395 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.130630 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwjbl\" (UniqueName: \"kubernetes.io/projected/0445ff5a-7f56-4085-98a2-35f8418fc9b5-kube-api-access-gwjbl\") pod \"route-controller-manager-668cf4f594-bg2ms\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.132939 5115 generic.go:358] "Generic (PLEG): container finished" podID="c3904fe4-fb4d-4794-8d28-a76e420c437f" containerID="42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a" exitCode=0
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.133223 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" event={"ID":"c3904fe4-fb4d-4794-8d28-a76e420c437f","Type":"ContainerDied","Data":"42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a"}
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.133306 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q" event={"ID":"c3904fe4-fb4d-4794-8d28-a76e420c437f","Type":"ContainerDied","Data":"c797a19ebd1b6ba916e33ac8707e5acaa7f5238c9ba9c86fc08a09140acea056"}
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.133417 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856c8c4494-gzm5q"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.148789 5115 scope.go:117] "RemoveContainer" containerID="30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be"
Jan 20 09:11:16 crc kubenswrapper[5115]: E0120 09:11:16.149292 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be\": container with ID starting with 30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be not found: ID does not exist" containerID="30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.149340 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be"} err="failed to get container status \"30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be\": rpc error: code = NotFound desc = could not find container \"30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be\": container with ID starting with 30daf23eb5d860a7b3832fd3f7b5708676ed9e22c115a461341e4ca19ed8c2be not found: ID does not exist"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.149363 5115 scope.go:117] "RemoveContainer" containerID="42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.167925 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"]
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.171118 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbb47955d-p9csw"]
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.173098 5115 scope.go:117] "RemoveContainer" containerID="42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a"
Jan 20 09:11:16 crc kubenswrapper[5115]: E0120 09:11:16.174440 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a\": container with ID starting with 42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a not found: ID does not exist" containerID="42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.174485 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a"} err="failed to get container status \"42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a\": rpc error: code = NotFound desc = could not find container \"42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a\": container with ID starting with 42119b70d5dddd02d9f195c2192729a0ee39c8ce459b1cf112b279e95c1aab2a not found: ID does not exist"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.181504 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-856c8c4494-gzm5q"]
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.184536 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-856c8c4494-gzm5q"]
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.213220 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/941ddcdd-0183-45d6-929e-e4138126657d-tmp\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.213603 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-config\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.213712 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-client-ca\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.213796 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pctql\" (UniqueName: \"kubernetes.io/projected/941ddcdd-0183-45d6-929e-e4138126657d-kube-api-access-pctql\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.213920 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-proxy-ca-bundles\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214024 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ddcdd-0183-45d6-929e-e4138126657d-serving-cert\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214175 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c3904fe4-fb4d-4794-8d28-a76e420c437f-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214242 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214296 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214349 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3904fe4-fb4d-4794-8d28-a76e420c437f-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214426 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g99kg\" (UniqueName: \"kubernetes.io/projected/c3904fe4-fb4d-4794-8d28-a76e420c437f-kube-api-access-g99kg\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.214500 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3904fe4-fb4d-4794-8d28-a76e420c437f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.222482 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.224952 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3904fe4-fb4d-4794-8d28-a76e420c437f" path="/var/lib/kubelet/pods/c3904fe4-fb4d-4794-8d28-a76e420c437f/volumes"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.225495 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea354490-c1e9-4cb2-a05e-2691aa628f04" path="/var/lib/kubelet/pods/ea354490-c1e9-4cb2-a05e-2691aa628f04/volumes"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.316162 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/941ddcdd-0183-45d6-929e-e4138126657d-tmp\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.316820 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-config\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.316985 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-client-ca\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"
Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.317138 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pctql\"
(UniqueName: \"kubernetes.io/projected/941ddcdd-0183-45d6-929e-e4138126657d-kube-api-access-pctql\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.317312 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/941ddcdd-0183-45d6-929e-e4138126657d-tmp\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.317446 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-proxy-ca-bundles\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.317582 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ddcdd-0183-45d6-929e-e4138126657d-serving-cert\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.318236 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-client-ca\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.318859 5115 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-config\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.319072 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-proxy-ca-bundles\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.323182 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ddcdd-0183-45d6-929e-e4138126657d-serving-cert\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.337708 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pctql\" (UniqueName: \"kubernetes.io/projected/941ddcdd-0183-45d6-929e-e4138126657d-kube-api-access-pctql\") pod \"controller-manager-5fb6cd4bfd-x5c9k\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.411796 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.617581 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"] Jan 20 09:11:16 crc kubenswrapper[5115]: I0120 09:11:16.619841 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"] Jan 20 09:11:16 crc kubenswrapper[5115]: W0120 09:11:16.630487 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod941ddcdd_0183_45d6_929e_e4138126657d.slice/crio-8c3a4761f527173e089965db8d66967c048e37252d38c580a1f92fdbc0252b00 WatchSource:0}: Error finding container 8c3a4761f527173e089965db8d66967c048e37252d38c580a1f92fdbc0252b00: Status 404 returned error can't find the container with id 8c3a4761f527173e089965db8d66967c048e37252d38c580a1f92fdbc0252b00 Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.148491 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" event={"ID":"941ddcdd-0183-45d6-929e-e4138126657d","Type":"ContainerStarted","Data":"04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220"} Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.148861 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" event={"ID":"941ddcdd-0183-45d6-929e-e4138126657d","Type":"ContainerStarted","Data":"8c3a4761f527173e089965db8d66967c048e37252d38c580a1f92fdbc0252b00"} Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.148884 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.153387 5115 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" event={"ID":"0445ff5a-7f56-4085-98a2-35f8418fc9b5","Type":"ContainerStarted","Data":"458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308"} Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.153412 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" event={"ID":"0445ff5a-7f56-4085-98a2-35f8418fc9b5","Type":"ContainerStarted","Data":"43d6fd31b5f6c85f09558bdd078897e3c86d6bae035ecb48d217d8449927c41f"} Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.153426 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.191722 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" podStartSLOduration=2.191696043 podStartE2EDuration="2.191696043s" podCreationTimestamp="2026-01-20 09:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:11:17.184548792 +0000 UTC m=+187.353327342" watchObservedRunningTime="2026-01-20 09:11:17.191696043 +0000 UTC m=+187.360474573" Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.207959 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" podStartSLOduration=2.207930528 podStartE2EDuration="2.207930528s" podCreationTimestamp="2026-01-20 09:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:11:17.204144167 +0000 UTC m=+187.372922697" 
watchObservedRunningTime="2026-01-20 09:11:17.207930528 +0000 UTC m=+187.376709048" Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.473826 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" Jan 20 09:11:17 crc kubenswrapper[5115]: I0120 09:11:17.569394 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:22 crc kubenswrapper[5115]: I0120 09:11:22.564556 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c88bx"] Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.386863 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"] Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.387925 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" podUID="941ddcdd-0183-45d6-929e-e4138126657d" containerName="controller-manager" containerID="cri-o://04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220" gracePeriod=30 Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.404294 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"] Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.404628 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" podUID="0445ff5a-7f56-4085-98a2-35f8418fc9b5" containerName="route-controller-manager" containerID="cri-o://458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308" gracePeriod=30 Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.910138 5115 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.941329 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"] Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.942353 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0445ff5a-7f56-4085-98a2-35f8418fc9b5" containerName="route-controller-manager" Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.942376 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="0445ff5a-7f56-4085-98a2-35f8418fc9b5" containerName="route-controller-manager" Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.942484 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="0445ff5a-7f56-4085-98a2-35f8418fc9b5" containerName="route-controller-manager" Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.947735 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:35 crc kubenswrapper[5115]: I0120 09:11:35.950971 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"] Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.055647 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-config\") pod \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056204 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0445ff5a-7f56-4085-98a2-35f8418fc9b5-tmp\") pod \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056315 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwjbl\" (UniqueName: \"kubernetes.io/projected/0445ff5a-7f56-4085-98a2-35f8418fc9b5-kube-api-access-gwjbl\") pod \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056353 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-client-ca\") pod \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\" (UID: \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056379 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0445ff5a-7f56-4085-98a2-35f8418fc9b5-serving-cert\") pod \"0445ff5a-7f56-4085-98a2-35f8418fc9b5\" (UID: 
\"0445ff5a-7f56-4085-98a2-35f8418fc9b5\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056556 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dbb2166-3ca6-40c1-8837-22587ad8df2e-serving-cert\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056599 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-config\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056634 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr57s\" (UniqueName: \"kubernetes.io/projected/6dbb2166-3ca6-40c1-8837-22587ad8df2e-kube-api-access-fr57s\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056639 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0445ff5a-7f56-4085-98a2-35f8418fc9b5-tmp" (OuterVolumeSpecName: "tmp") pod "0445ff5a-7f56-4085-98a2-35f8418fc9b5" (UID: "0445ff5a-7f56-4085-98a2-35f8418fc9b5"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056715 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-config" (OuterVolumeSpecName: "config") pod "0445ff5a-7f56-4085-98a2-35f8418fc9b5" (UID: "0445ff5a-7f56-4085-98a2-35f8418fc9b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056829 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-client-ca\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.056866 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6dbb2166-3ca6-40c1-8837-22587ad8df2e-tmp\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.057034 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.057060 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0445ff5a-7f56-4085-98a2-35f8418fc9b5-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.057042 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-client-ca" (OuterVolumeSpecName: "client-ca") pod "0445ff5a-7f56-4085-98a2-35f8418fc9b5" (UID: "0445ff5a-7f56-4085-98a2-35f8418fc9b5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.072727 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0445ff5a-7f56-4085-98a2-35f8418fc9b5-kube-api-access-gwjbl" (OuterVolumeSpecName: "kube-api-access-gwjbl") pod "0445ff5a-7f56-4085-98a2-35f8418fc9b5" (UID: "0445ff5a-7f56-4085-98a2-35f8418fc9b5"). InnerVolumeSpecName "kube-api-access-gwjbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.073623 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0445ff5a-7f56-4085-98a2-35f8418fc9b5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0445ff5a-7f56-4085-98a2-35f8418fc9b5" (UID: "0445ff5a-7f56-4085-98a2-35f8418fc9b5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.153460 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158121 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-client-ca\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158183 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6dbb2166-3ca6-40c1-8837-22587ad8df2e-tmp\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158233 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dbb2166-3ca6-40c1-8837-22587ad8df2e-serving-cert\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158272 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-config\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158323 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fr57s\" (UniqueName: 
\"kubernetes.io/projected/6dbb2166-3ca6-40c1-8837-22587ad8df2e-kube-api-access-fr57s\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158413 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gwjbl\" (UniqueName: \"kubernetes.io/projected/0445ff5a-7f56-4085-98a2-35f8418fc9b5-kube-api-access-gwjbl\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158430 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0445ff5a-7f56-4085-98a2-35f8418fc9b5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.158443 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0445ff5a-7f56-4085-98a2-35f8418fc9b5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.159255 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6dbb2166-3ca6-40c1-8837-22587ad8df2e-tmp\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.159471 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-client-ca\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.161043 5115 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-config\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.164767 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dbb2166-3ca6-40c1-8837-22587ad8df2e-serving-cert\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.186553 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"] Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.186794 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr57s\" (UniqueName: \"kubernetes.io/projected/6dbb2166-3ca6-40c1-8837-22587ad8df2e-kube-api-access-fr57s\") pod \"route-controller-manager-6b95c9954c-nvlzz\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") " pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.187151 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="941ddcdd-0183-45d6-929e-e4138126657d" containerName="controller-manager" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.187171 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="941ddcdd-0183-45d6-929e-e4138126657d" containerName="controller-manager" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.187285 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="941ddcdd-0183-45d6-929e-e4138126657d" 
containerName="controller-manager" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.193832 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.204306 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"] Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259367 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-config\") pod \"941ddcdd-0183-45d6-929e-e4138126657d\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259463 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/941ddcdd-0183-45d6-929e-e4138126657d-tmp\") pod \"941ddcdd-0183-45d6-929e-e4138126657d\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259543 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pctql\" (UniqueName: \"kubernetes.io/projected/941ddcdd-0183-45d6-929e-e4138126657d-kube-api-access-pctql\") pod \"941ddcdd-0183-45d6-929e-e4138126657d\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259586 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ddcdd-0183-45d6-929e-e4138126657d-serving-cert\") pod \"941ddcdd-0183-45d6-929e-e4138126657d\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259691 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-proxy-ca-bundles\") pod \"941ddcdd-0183-45d6-929e-e4138126657d\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259756 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-client-ca\") pod \"941ddcdd-0183-45d6-929e-e4138126657d\" (UID: \"941ddcdd-0183-45d6-929e-e4138126657d\") " Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.259954 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e0393a6-c76b-4bd6-9358-0314c2eca550-tmp\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260047 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-config\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260094 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0393a6-c76b-4bd6-9358-0314c2eca550-serving-cert\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260143 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-proxy-ca-bundles\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260176 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/941ddcdd-0183-45d6-929e-e4138126657d-tmp" (OuterVolumeSpecName: "tmp") pod "941ddcdd-0183-45d6-929e-e4138126657d" (UID: "941ddcdd-0183-45d6-929e-e4138126657d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260231 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-client-ca\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260364 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8q8k\" (UniqueName: \"kubernetes.io/projected/0e0393a6-c76b-4bd6-9358-0314c2eca550-kube-api-access-k8q8k\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260455 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/941ddcdd-0183-45d6-929e-e4138126657d-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260455 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-config" 
(OuterVolumeSpecName: "config") pod "941ddcdd-0183-45d6-929e-e4138126657d" (UID: "941ddcdd-0183-45d6-929e-e4138126657d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.260953 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-client-ca" (OuterVolumeSpecName: "client-ca") pod "941ddcdd-0183-45d6-929e-e4138126657d" (UID: "941ddcdd-0183-45d6-929e-e4138126657d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.261095 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "941ddcdd-0183-45d6-929e-e4138126657d" (UID: "941ddcdd-0183-45d6-929e-e4138126657d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.263518 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941ddcdd-0183-45d6-929e-e4138126657d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "941ddcdd-0183-45d6-929e-e4138126657d" (UID: "941ddcdd-0183-45d6-929e-e4138126657d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.263961 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.264857 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941ddcdd-0183-45d6-929e-e4138126657d-kube-api-access-pctql" (OuterVolumeSpecName: "kube-api-access-pctql") pod "941ddcdd-0183-45d6-929e-e4138126657d" (UID: "941ddcdd-0183-45d6-929e-e4138126657d"). InnerVolumeSpecName "kube-api-access-pctql". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.290580 5115 generic.go:358] "Generic (PLEG): container finished" podID="0445ff5a-7f56-4085-98a2-35f8418fc9b5" containerID="458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308" exitCode=0 Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.291164 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" event={"ID":"0445ff5a-7f56-4085-98a2-35f8418fc9b5","Type":"ContainerDied","Data":"458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308"} Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.291207 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.291235 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms" event={"ID":"0445ff5a-7f56-4085-98a2-35f8418fc9b5","Type":"ContainerDied","Data":"43d6fd31b5f6c85f09558bdd078897e3c86d6bae035ecb48d217d8449927c41f"} Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.291282 5115 scope.go:117] "RemoveContainer" containerID="458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.293192 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.293227 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" event={"ID":"941ddcdd-0183-45d6-929e-e4138126657d","Type":"ContainerDied","Data":"04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220"} Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.293388 5115 generic.go:358] "Generic (PLEG): container finished" podID="941ddcdd-0183-45d6-929e-e4138126657d" containerID="04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220" exitCode=0 Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.293690 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k" event={"ID":"941ddcdd-0183-45d6-929e-e4138126657d","Type":"ContainerDied","Data":"8c3a4761f527173e089965db8d66967c048e37252d38c580a1f92fdbc0252b00"} Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.323344 5115 scope.go:117] "RemoveContainer" containerID="458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308" Jan 20 09:11:36 crc 
kubenswrapper[5115]: E0120 09:11:36.324128 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308\": container with ID starting with 458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308 not found: ID does not exist" containerID="458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.324162 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308"} err="failed to get container status \"458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308\": rpc error: code = NotFound desc = could not find container \"458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308\": container with ID starting with 458af259f686d47aa4a98aab2dd0cb4e40b1786ecacbd6440592284ca6834308 not found: ID does not exist" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.324185 5115 scope.go:117] "RemoveContainer" containerID="04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.326400 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"] Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.328991 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668cf4f594-bg2ms"] Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.346785 5115 scope.go:117] "RemoveContainer" containerID="04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.346860 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"] Jan 20 
09:11:36 crc kubenswrapper[5115]: E0120 09:11:36.347203 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220\": container with ID starting with 04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220 not found: ID does not exist" containerID="04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.347223 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220"} err="failed to get container status \"04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220\": rpc error: code = NotFound desc = could not find container \"04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220\": container with ID starting with 04fdb76e4c3d5800656c6368715bd08cc2c5d4bfc4fafdc41c25304461f5b220 not found: ID does not exist" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.350654 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5fb6cd4bfd-x5c9k"] Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361361 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-client-ca\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361407 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k8q8k\" (UniqueName: \"kubernetes.io/projected/0e0393a6-c76b-4bd6-9358-0314c2eca550-kube-api-access-k8q8k\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: 
\"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361465 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e0393a6-c76b-4bd6-9358-0314c2eca550-tmp\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361538 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-config\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361565 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0393a6-c76b-4bd6-9358-0314c2eca550-serving-cert\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361604 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-proxy-ca-bundles\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361650 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361660 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361669 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ddcdd-0183-45d6-929e-e4138126657d-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361678 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pctql\" (UniqueName: \"kubernetes.io/projected/941ddcdd-0183-45d6-929e-e4138126657d-kube-api-access-pctql\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.361688 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ddcdd-0183-45d6-929e-e4138126657d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.365441 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e0393a6-c76b-4bd6-9358-0314c2eca550-tmp\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.365686 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-client-ca\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc 
kubenswrapper[5115]: I0120 09:11:36.366343 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-config\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.368644 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-proxy-ca-bundles\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.369272 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0393a6-c76b-4bd6-9358-0314c2eca550-serving-cert\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.388303 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8q8k\" (UniqueName: \"kubernetes.io/projected/0e0393a6-c76b-4bd6-9358-0314c2eca550-kube-api-access-k8q8k\") pod \"controller-manager-6cb7c98cbc-lhp2l\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") " pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.519755 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.740692 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"] Jan 20 09:11:36 crc kubenswrapper[5115]: W0120 09:11:36.745749 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dbb2166_3ca6_40c1_8837_22587ad8df2e.slice/crio-368a735da1f99fc4138c761b29484fa6a4c95fa01e8ee82b62c23cf95bf3f7b8 WatchSource:0}: Error finding container 368a735da1f99fc4138c761b29484fa6a4c95fa01e8ee82b62c23cf95bf3f7b8: Status 404 returned error can't find the container with id 368a735da1f99fc4138c761b29484fa6a4c95fa01e8ee82b62c23cf95bf3f7b8 Jan 20 09:11:36 crc kubenswrapper[5115]: I0120 09:11:36.973637 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"] Jan 20 09:11:36 crc kubenswrapper[5115]: W0120 09:11:36.979158 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e0393a6_c76b_4bd6_9358_0314c2eca550.slice/crio-16f00ae2e909bdbab9e9f0bb68dfa4c4d6e9c21c455eefd3d26a54cf17f6d6dd WatchSource:0}: Error finding container 16f00ae2e909bdbab9e9f0bb68dfa4c4d6e9c21c455eefd3d26a54cf17f6d6dd: Status 404 returned error can't find the container with id 16f00ae2e909bdbab9e9f0bb68dfa4c4d6e9c21c455eefd3d26a54cf17f6d6dd Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.302556 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" event={"ID":"6dbb2166-3ca6-40c1-8837-22587ad8df2e","Type":"ContainerStarted","Data":"694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea"} Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.302621 5115 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" event={"ID":"6dbb2166-3ca6-40c1-8837-22587ad8df2e","Type":"ContainerStarted","Data":"368a735da1f99fc4138c761b29484fa6a4c95fa01e8ee82b62c23cf95bf3f7b8"} Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.302812 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.306445 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" event={"ID":"0e0393a6-c76b-4bd6-9358-0314c2eca550","Type":"ContainerStarted","Data":"f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026"} Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.306507 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" event={"ID":"0e0393a6-c76b-4bd6-9358-0314c2eca550","Type":"ContainerStarted","Data":"16f00ae2e909bdbab9e9f0bb68dfa4c4d6e9c21c455eefd3d26a54cf17f6d6dd"} Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.306573 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.325116 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" podStartSLOduration=2.325093118 podStartE2EDuration="2.325093118s" podCreationTimestamp="2026-01-20 09:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:11:37.322540258 +0000 UTC m=+207.491318798" watchObservedRunningTime="2026-01-20 09:11:37.325093118 +0000 UTC 
m=+207.493871648" Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.337759 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.348107 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" podStartSLOduration=2.348096755 podStartE2EDuration="2.348096755s" podCreationTimestamp="2026-01-20 09:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:11:37.346589544 +0000 UTC m=+207.515368124" watchObservedRunningTime="2026-01-20 09:11:37.348096755 +0000 UTC m=+207.516875285" Jan 20 09:11:37 crc kubenswrapper[5115]: I0120 09:11:37.387227 5115 ???:1] "http: TLS handshake error from 192.168.126.11:50856: no serving certificate available for the kubelet" Jan 20 09:11:38 crc kubenswrapper[5115]: I0120 09:11:38.132502 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:11:38 crc kubenswrapper[5115]: I0120 09:11:38.225666 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0445ff5a-7f56-4085-98a2-35f8418fc9b5" path="/var/lib/kubelet/pods/0445ff5a-7f56-4085-98a2-35f8418fc9b5/volumes" Jan 20 09:11:38 crc kubenswrapper[5115]: I0120 09:11:38.226623 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941ddcdd-0183-45d6-929e-e4138126657d" path="/var/lib/kubelet/pods/941ddcdd-0183-45d6-929e-e4138126657d/volumes" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.955807 5115 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.978873 5115 kubelet.go:2547] "SyncLoop 
REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.979120 5115 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.979298 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.980322 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c" gracePeriod=15 Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.980627 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://cd35bfe818999fb69f754d3ef537d63114d8766c9a55fd8c1f055b4598993e53" gracePeriod=15 Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.980740 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df" gracePeriod=15 Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.980807 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652" gracePeriod=15 Jan 20 09:11:43 crc 
kubenswrapper[5115]: I0120 09:11:43.980923 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5" gracePeriod=15 Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981065 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981128 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981158 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981174 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981193 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981277 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981306 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981369 5115 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981475 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981499 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981518 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981584 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981658 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981682 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981702 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.981761 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982343 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc 
kubenswrapper[5115]: I0120 09:11:43.982445 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982478 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982546 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982571 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982591 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982682 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.982757 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.983259 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.983339 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.983462 5115 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.983540 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.984053 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 20 09:11:43 crc kubenswrapper[5115]: I0120 09:11:43.991637 5115 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.044139 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.082996 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083205 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083232 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083253 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083276 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083335 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083365 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083476 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083532 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.083565 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.184871 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.184997 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185045 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185081 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185115 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185143 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185224 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185267 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185303 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185362 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185471 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185841 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.185973 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.186012 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.186043 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.186075 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.186480 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.186527 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc 
kubenswrapper[5115]: I0120 09:11:44.186549 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.186561 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.263580 5115 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.264089 5115 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.264589 5115 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.265280 5115 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.265551 5115 controller.go:195] 
"Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.265585 5115 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.265805 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="200ms" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.356970 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.359280 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.360220 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="cd35bfe818999fb69f754d3ef537d63114d8766c9a55fd8c1f055b4598993e53" exitCode=0 Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.360273 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df" exitCode=0 Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.360291 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652" exitCode=0 Jan 20 09:11:44 crc 
kubenswrapper[5115]: I0120 09:11:44.360311 5115 scope.go:117] "RemoveContainer" containerID="b33a6c20a19b1f41a4ad2db77e01a63543fbc61ea2e64cd19c7c6530bae76c3b" Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.360318 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5" exitCode=2 Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.362755 5115 generic.go:358] "Generic (PLEG): container finished" podID="128ab750-3574-4f36-a27e-5bddc737a52d" containerID="92b9831f290b04d0013bc0318c36c8ef1081a308ee1f6759b62245920ad2c43e" exitCode=0 Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.362890 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"128ab750-3574-4f36-a27e-5bddc737a52d","Type":"ContainerDied","Data":"92b9831f290b04d0013bc0318c36c8ef1081a308ee1f6759b62245920ad2c43e"} Jan 20 09:11:44 crc kubenswrapper[5115]: I0120 09:11:44.364184 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.466755 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="400ms" Jan 20 09:11:44 crc kubenswrapper[5115]: E0120 09:11:44.868309 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: 
connection refused" interval="800ms" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.378700 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 20 09:11:45 crc kubenswrapper[5115]: E0120 09:11:45.669972 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="1.6s" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.772613 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.773942 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.925782 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128ab750-3574-4f36-a27e-5bddc737a52d-kube-api-access\") pod \"128ab750-3574-4f36-a27e-5bddc737a52d\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.925862 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-kubelet-dir\") pod \"128ab750-3574-4f36-a27e-5bddc737a52d\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.925944 5115 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-var-lock\") pod \"128ab750-3574-4f36-a27e-5bddc737a52d\" (UID: \"128ab750-3574-4f36-a27e-5bddc737a52d\") " Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.925989 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "128ab750-3574-4f36-a27e-5bddc737a52d" (UID: "128ab750-3574-4f36-a27e-5bddc737a52d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.926070 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-var-lock" (OuterVolumeSpecName: "var-lock") pod "128ab750-3574-4f36-a27e-5bddc737a52d" (UID: "128ab750-3574-4f36-a27e-5bddc737a52d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.926560 5115 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.926576 5115 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/128ab750-3574-4f36-a27e-5bddc737a52d-var-lock\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:45 crc kubenswrapper[5115]: I0120 09:11:45.939524 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128ab750-3574-4f36-a27e-5bddc737a52d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "128ab750-3574-4f36-a27e-5bddc737a52d" (UID: "128ab750-3574-4f36-a27e-5bddc737a52d"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.028684 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/128ab750-3574-4f36-a27e-5bddc737a52d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.390511 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.391856 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c" exitCode=0 Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.392096 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c434758e6e9146827245a5ae9ad4f26779e19f2474d8e2ec2f6da8ef3ada11b" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.393804 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"128ab750-3574-4f36-a27e-5bddc737a52d","Type":"ContainerDied","Data":"2f73af6d69f6c232d9d9d0a495fca6672d15d9b3c8a84a1c612e0ef514970d06"} Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.393863 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f73af6d69f6c232d9d9d0a495fca6672d15d9b3c8a84a1c612e0ef514970d06" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.394045 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.395820 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.396698 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.397991 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.398241 5115 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.398573 5115 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.398806 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial 
tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433522 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433623 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433712 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433721 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433768 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433782 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433793 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.433844 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.434312 5115 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.434348 5115 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.434365 5115 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.434854 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.436754 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.535648 5115 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:46 crc kubenswrapper[5115]: I0120 09:11:46.535705 5115 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:47 crc kubenswrapper[5115]: E0120 09:11:47.271240 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="3.2s" Jan 20 09:11:47 crc kubenswrapper[5115]: I0120 09:11:47.399020 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:47 crc kubenswrapper[5115]: I0120 09:11:47.431310 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:47 crc kubenswrapper[5115]: I0120 09:11:47.431717 5115 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:47 crc kubenswrapper[5115]: I0120 09:11:47.606496 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" containerName="oauth-openshift" containerID="cri-o://cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292" gracePeriod=15 Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.161212 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.162745 5115 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.163702 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.164456 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.230401 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.265812 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-trusted-ca-bundle\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.265950 5115 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-service-ca\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.266004 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-provider-selection\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.267150 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.267257 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.266043 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-serving-cert\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.267990 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-cliconfig\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268120 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-policies\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268192 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-router-certs\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268317 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-session\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268440 
5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pzbj\" (UniqueName: \"kubernetes.io/projected/73f78db9-bab5-49ee-84a4-9f0825efca8a-kube-api-access-2pzbj\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268531 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-ocp-branding-template\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268627 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-idp-0-file-data\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268716 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-dir\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268768 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-error\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268831 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.268998 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-login\") pod \"73f78db9-bab5-49ee-84a4-9f0825efca8a\" (UID: \"73f78db9-bab5-49ee-84a4-9f0825efca8a\") " Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.269040 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.269182 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.269941 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.269993 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.270019 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.270045 5115 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.270071 5115 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/73f78db9-bab5-49ee-84a4-9f0825efca8a-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.279186 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73f78db9-bab5-49ee-84a4-9f0825efca8a-kube-api-access-2pzbj" (OuterVolumeSpecName: "kube-api-access-2pzbj") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "kube-api-access-2pzbj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.279287 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.280413 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.280846 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.281762 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.282284 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.282736 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.283297 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.283713 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "73f78db9-bab5-49ee-84a4-9f0825efca8a" (UID: "73f78db9-bab5-49ee-84a4-9f0825efca8a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.371981 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372063 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372114 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372135 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372219 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372240 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372292 5115 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372311 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2pzbj\" (UniqueName: \"kubernetes.io/projected/73f78db9-bab5-49ee-84a4-9f0825efca8a-kube-api-access-2pzbj\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.372329 5115 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/73f78db9-bab5-49ee-84a4-9f0825efca8a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.409585 5115 generic.go:358] "Generic (PLEG): container finished" podID="73f78db9-bab5-49ee-84a4-9f0825efca8a" containerID="cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292" exitCode=0 Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.409755 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.409787 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" event={"ID":"73f78db9-bab5-49ee-84a4-9f0825efca8a","Type":"ContainerDied","Data":"cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292"} Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.409826 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" event={"ID":"73f78db9-bab5-49ee-84a4-9f0825efca8a","Type":"ContainerDied","Data":"41ea8c623ecacb84e93a0bb70429c6d21f2263332366f0ca16d5017167557e81"} Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.409847 5115 scope.go:117] "RemoveContainer" containerID="cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.410621 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.410821 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.435236 5115 scope.go:117] "RemoveContainer" containerID="cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292" Jan 20 09:11:48 crc kubenswrapper[5115]: E0120 09:11:48.435956 5115 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292\": container with ID starting with cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292 not found: ID does not exist" containerID="cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.436031 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292"} err="failed to get container status \"cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292\": rpc error: code = NotFound desc = could not find container \"cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292\": container with ID starting with cd61efcab514cc481b8abf90fad1504f795c14ca967ea45686ed74a313ace292 not found: ID does not exist" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.442044 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:48 crc kubenswrapper[5115]: I0120 09:11:48.442690 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:49 crc kubenswrapper[5115]: E0120 09:11:49.047704 5115 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial 
tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:49 crc kubenswrapper[5115]: I0120 09:11:49.048396 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:49 crc kubenswrapper[5115]: E0120 09:11:49.090067 5115 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c657586494e4c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:11:49.088292428 +0000 UTC m=+219.257070988,LastTimestamp:2026-01-20 09:11:49.088292428 +0000 UTC m=+219.257070988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:11:49 crc kubenswrapper[5115]: I0120 09:11:49.423164 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"f880c11b80dde2953894863f4663242621b5298262f11f219e74f37d19d8d8c4"} Jan 20 09:11:50 crc kubenswrapper[5115]: I0120 09:11:50.219948 5115 status_manager.go:895] "Failed to get status for pod" 
podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:50 crc kubenswrapper[5115]: I0120 09:11:50.220767 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:50 crc kubenswrapper[5115]: I0120 09:11:50.436656 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247"} Jan 20 09:11:50 crc kubenswrapper[5115]: I0120 09:11:50.436945 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:50 crc kubenswrapper[5115]: I0120 09:11:50.437400 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:50 crc kubenswrapper[5115]: E0120 09:11:50.437439 5115 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:50 crc kubenswrapper[5115]: 
I0120 09:11:50.437696 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:50 crc kubenswrapper[5115]: E0120 09:11:50.472208 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="6.4s" Jan 20 09:11:51 crc kubenswrapper[5115]: I0120 09:11:51.446535 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:51 crc kubenswrapper[5115]: E0120 09:11:51.447394 5115 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 09:11:52 crc kubenswrapper[5115]: E0120 09:11:52.599947 5115 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c657586494e4c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 09:11:49.088292428 +0000 UTC m=+219.257070988,LastTimestamp:2026-01-20 09:11:49.088292428 +0000 UTC m=+219.257070988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 09:11:56 crc kubenswrapper[5115]: E0120 09:11:56.873813 5115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="7s" Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.517381 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.517474 5115 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447" exitCode=1 Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.517527 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447"} Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.518405 5115 scope.go:117] "RemoveContainer" containerID="cee213223198b5e3642cdac2764daeb64bf20128377548aa985feafed2a3d447" Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.518909 5115 status_manager.go:895] "Failed to get status for pod" 
podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.519593 5115 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:58 crc kubenswrapper[5115]: I0120 09:11:58.520045 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.217518 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.219475 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.220197 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.220771 5115 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.236071 5115 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.236120 5115 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba" Jan 20 09:11:59 crc kubenswrapper[5115]: E0120 09:11:59.236797 5115 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: 
connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.237329 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:11:59 crc kubenswrapper[5115]: W0120 09:11:59.260717 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-b1267ebe36fc4bca812eb426f0968d81d225f1a5a4da6bad5112b70419b7c6c0 WatchSource:0}: Error finding container b1267ebe36fc4bca812eb426f0968d81d225f1a5a4da6bad5112b70419b7c6c0: Status 404 returned error can't find the container with id b1267ebe36fc4bca812eb426f0968d81d225f1a5a4da6bad5112b70419b7c6c0 Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.545510 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b1267ebe36fc4bca812eb426f0968d81d225f1a5a4da6bad5112b70419b7c6c0"} Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.550645 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.551047 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"d52c89544587359b9809e7538e1334a5902e517df87226da8b50b669ba88e727"} Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.552218 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.552593 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:11:59 crc kubenswrapper[5115]: I0120 09:11:59.553012 5115 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.236642 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.237519 5115 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.238139 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" 
pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.238526 5115 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.312039 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.566517 5115 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="130c63fa2a2cbc202bdebd1ad19f2a89021c9e25f31c646f25e6d24d2fda1d10" exitCode=0 Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.566627 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"130c63fa2a2cbc202bdebd1ad19f2a89021c9e25f31c646f25e6d24d2fda1d10"} Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.567345 5115 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.568033 5115 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.568106 5115 status_manager.go:895] "Failed to get status for pod" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" 
pod="openshift-authentication/oauth-openshift-66458b6674-c88bx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c88bx\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: E0120 09:12:00.568745 5115 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.568801 5115 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.569445 5115 status_manager.go:895] "Failed to get status for pod" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:00 crc kubenswrapper[5115]: I0120 09:12:00.570015 5115 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 20 09:12:01 crc kubenswrapper[5115]: I0120 09:12:01.582528 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6c43dcdb283f3aa5109a0fc20e7f80d16f4889cfdfa6b195593fcb5764f51caf"}
Jan 20 09:12:01 crc kubenswrapper[5115]: I0120 09:12:01.583076 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"63077e0dbd463f50e32ddcd38c795763f097acc01ad341160025ace225579c96"}
Jan 20 09:12:01 crc kubenswrapper[5115]: I0120 09:12:01.583096 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d252061aed5999272626544b964803b9e3f1e7313dfb41b17be61902d46b66ef"}
Jan 20 09:12:02 crc kubenswrapper[5115]: I0120 09:12:02.597463 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"00deeca7107d97e93b957a8f41ee4451022c262f5c4bed7b87afa4cf4f77ebcf"}
Jan 20 09:12:02 crc kubenswrapper[5115]: I0120 09:12:02.598001 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:12:02 crc kubenswrapper[5115]: I0120 09:12:02.598025 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c658a796eeda2ba4ece7dce49af08bbbb29572226fb175cea183c3f2b4286a0e"}
Jan 20 09:12:02 crc kubenswrapper[5115]: I0120 09:12:02.598142 5115 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba"
Jan 20 09:12:02 crc kubenswrapper[5115]: I0120 09:12:02.598178 5115 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba"
Jan 20 09:12:04 crc kubenswrapper[5115]: I0120 09:12:04.237743 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:12:04 crc kubenswrapper[5115]: I0120 09:12:04.238124 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:12:04 crc kubenswrapper[5115]: I0120 09:12:04.245495 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:12:07 crc kubenswrapper[5115]: I0120 09:12:07.819144 5115 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:12:07 crc kubenswrapper[5115]: I0120 09:12:07.819495 5115 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:12:07 crc kubenswrapper[5115]: I0120 09:12:07.896755 5115 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="1ac6b65b-44a1-4768-aa23-062028f72cae"
Jan 20 09:12:08 crc kubenswrapper[5115]: I0120 09:12:08.482766 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:12:08 crc kubenswrapper[5115]: I0120 09:12:08.482845 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:12:08 crc kubenswrapper[5115]: I0120 09:12:08.638488 5115 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba"
Jan 20 09:12:08 crc kubenswrapper[5115]: I0120 09:12:08.638538 5115 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5125ab95-d5cf-48ad-a899-3add343eaeba"
Jan 20 09:12:08 crc kubenswrapper[5115]: I0120 09:12:08.643775 5115 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="1ac6b65b-44a1-4768-aa23-062028f72cae"
Jan 20 09:12:09 crc kubenswrapper[5115]: I0120 09:12:09.300495 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:12:09 crc kubenswrapper[5115]: I0120 09:12:09.307813 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:12:09 crc kubenswrapper[5115]: I0120 09:12:09.656538 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 09:12:17 crc kubenswrapper[5115]: I0120 09:12:17.870047 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Jan 20 09:12:18 crc kubenswrapper[5115]: I0120 09:12:18.112359 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Jan 20 09:12:18 crc kubenswrapper[5115]: I0120 09:12:18.271731 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Jan 20 09:12:18 crc
kubenswrapper[5115]: I0120 09:12:18.488206 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Jan 20 09:12:18 crc kubenswrapper[5115]: I0120 09:12:18.914860 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.063408 5115 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.094837 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.280293 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.314485 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.431924 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.598377 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Jan 20 09:12:19 crc kubenswrapper[5115]: I0120 09:12:19.690812 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.129248 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.135939 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.246243 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.313358 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.576106 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.686278 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.699570 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.713534 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.811961 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.895853 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.916557 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.981278 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Jan 20 09:12:20 crc kubenswrapper[5115]: I0120 09:12:20.983600 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.079319 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.115537 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.180622 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.275996 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.370791 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.446478 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.550398 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.551332 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.552141 5115 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.711660 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.726809 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.727133 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.787341 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.899997 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.923357 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.934760 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 20 09:12:21 crc kubenswrapper[5115]: I0120 09:12:21.998863 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.111815 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.115136 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.158817 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.215001 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.257475 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.282250 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.316255 5115 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.325385 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c88bx","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.325487 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.337111 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.337953 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.366104 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.366080096 podStartE2EDuration="15.366080096s" podCreationTimestamp="2026-01-20 09:12:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:12:22.362305453 +0000 UTC m=+252.531084023" watchObservedRunningTime="2026-01-20 09:12:22.366080096 +0000 UTC m=+252.534858636"
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.419833 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.642405 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.660881 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.693013 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.696800 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.713017 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.730827 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.791585 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.882367 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.925751 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 20 09:12:22 crc kubenswrapper[5115]: I0120 09:12:22.926219 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.099877 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.177930 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.209542 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.250127 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.292114 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.459244 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.481193 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.523473 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.557037 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.685375 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.688243 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.795945 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.876061 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.886935 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.887661 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.892992 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.904739 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.927432 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.974129 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:23 crc kubenswrapper[5115]: I0120 09:12:23.999732 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.007397 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.021427 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.023142 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.107092 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.119395 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.129581 5115 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.184181 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.192773 5115 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.239802 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" path="/var/lib/kubelet/pods/73f78db9-bab5-49ee-84a4-9f0825efca8a/volumes"
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.322596 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.334109 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.360239 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.408458 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.450584 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.512186 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.545238 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.618727 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.633257 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.702634 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.722697 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.731118 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.754378 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.791454 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.806015 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.848282 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.881872 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:24 crc kubenswrapper[5115]: I0120 09:12:24.979953 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.116201 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.116252 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.138190 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.299315 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.412976 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.873071 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.875541 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.875765 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.951870 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Jan 20 09:12:25 crc kubenswrapper[5115]: I0120 09:12:25.965157 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.040958 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.143367 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.199505 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.218954 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.278378 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.315100 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.339516 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.343164 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.346812 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.460736 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.557254 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.592038 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.592108 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.678948 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.719001 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.731378 5115 reflector.go:430] "Caches populated" type="*v1.Secret"
reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.880311 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 20 09:12:26 crc kubenswrapper[5115]: I0120 09:12:26.985551 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.024547 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.030092 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.058922 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.061702 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.119559 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.214466 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.228767 5115 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.306791 5115 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.358598 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.378479 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.484526 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.611931 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.684362 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.749255 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.761787 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-d5c987897-r9s5c"] Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.762807 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" containerName="oauth-openshift" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.762842 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" containerName="oauth-openshift" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.762888 5115 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" containerName="installer" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.762928 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" containerName="installer" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.763103 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="73f78db9-bab5-49ee-84a4-9f0825efca8a" containerName="oauth-openshift" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.763134 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="128ab750-3574-4f36-a27e-5bddc737a52d" containerName="installer" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.792641 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.797651 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.797670 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.797750 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.798263 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.798276 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 20 09:12:27 
crc kubenswrapper[5115]: I0120 09:12:27.798679 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.799848 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.800013 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.800034 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.806158 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.806434 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.806167 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.806883 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.811979 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.812868 5115 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.816343 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.820872 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856205 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856279 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-session\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856311 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856372 5115 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856454 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-error\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856525 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856568 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-service-ca\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856600 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-login\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856642 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856682 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q55xd\" (UniqueName: \"kubernetes.io/projected/8645f26f-7d64-4135-94fe-7b89b8f4484a-kube-api-access-q55xd\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856821 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-router-certs\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856915 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-audit-policies\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " 
pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.856960 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.857003 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8645f26f-7d64-4135-94fe-7b89b8f4484a-audit-dir\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.891109 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.891548 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959231 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959308 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q55xd\" 
(UniqueName: \"kubernetes.io/projected/8645f26f-7d64-4135-94fe-7b89b8f4484a-kube-api-access-q55xd\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959384 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-router-certs\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959465 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-audit-policies\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959505 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959555 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8645f26f-7d64-4135-94fe-7b89b8f4484a-audit-dir\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc 
kubenswrapper[5115]: I0120 09:12:27.959639 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959752 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8645f26f-7d64-4135-94fe-7b89b8f4484a-audit-dir\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959814 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-session\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959858 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.959968 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.960014 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-error\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.960077 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.960125 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-service-ca\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.960163 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-login\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " 
pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.962107 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-audit-policies\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.963392 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-service-ca\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.963419 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.964972 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.967829 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-session\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.967867 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-router-certs\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.967881 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-login\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.970805 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.971288 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" Jan 
20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.971369 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-error\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.973046 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.973582 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8645f26f-7d64-4135-94fe-7b89b8f4484a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:27 crc kubenswrapper[5115]: I0120 09:12:27.984147 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q55xd\" (UniqueName: \"kubernetes.io/projected/8645f26f-7d64-4135-94fe-7b89b8f4484a-kube-api-access-q55xd\") pod \"oauth-openshift-d5c987897-r9s5c\" (UID: \"8645f26f-7d64-4135-94fe-7b89b8f4484a\") " pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.001304 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.130327 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.219321 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.342165 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.343970 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.432635 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.576777 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.612021 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.678365 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.788417 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 20 09:12:28 crc kubenswrapper[5115]: I0120 09:12:28.988801 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.090494 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.205007 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.214495 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.284052 5115 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.284446 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247" gracePeriod=5
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.415247 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.445549 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.515960 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.591589 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.636781 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.653832 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.668382 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.770417 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.781260 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.802070 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.885745 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.925351 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:29 crc kubenswrapper[5115]: I0120 09:12:29.927839 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.080094 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.115385 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.164202 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.245519 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.304020 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.314143 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.397993 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.404591 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.410387 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.567543 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.608379 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.685101 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.726573 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.785134 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 20 09:12:30 crc kubenswrapper[5115]: I0120 09:12:30.818844 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.101885 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.120205 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.278078 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.300940 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.427017 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.628312 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.711060 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.828625 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.891999 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d5c987897-r9s5c"]
Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.894137 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 20 09:12:31 crc kubenswrapper[5115]: I0120 09:12:31.963330 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.097165 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.211386 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.355111 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.439252 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.482240 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.570543 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.826777 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.831515 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" event={"ID":"8645f26f-7d64-4135-94fe-7b89b8f4484a","Type":"ContainerStarted","Data":"2a6d38ea66188b4dc9fbd34e1083c3ee3c881d72f7487cad89f80c82aacad543"}
Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.831569 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" event={"ID":"8645f26f-7d64-4135-94fe-7b89b8f4484a","Type":"ContainerStarted","Data":"3bf21765f71fe46f8bd1ca0017ec2ac3c4e1755182d9b057882d8e552348a522"}
Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.831946 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.861359 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c" podStartSLOduration=70.861333763 podStartE2EDuration="1m10.861333763s" podCreationTimestamp="2026-01-20 09:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:12:32.859745248 +0000 UTC m=+263.028523878" watchObservedRunningTime="2026-01-20 09:12:32.861333763 +0000 UTC m=+263.030112333"
Jan 20 09:12:32 crc kubenswrapper[5115]: I0120 09:12:32.957614 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.036072 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.045632 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.117047 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.142871 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.238045 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-d5c987897-r9s5c"
Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.353441 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.397162 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.604283 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Jan 20 09:12:33 crc kubenswrapper[5115]: I0120 09:12:33.922713 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.435182 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.435355 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.438181 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.573983 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574152 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574201 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574276 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574361 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574462 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574491 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574560 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.574530 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.575433 5115 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.575470 5115 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.575488 5115 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.575507 5115 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.587740 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.677060 5115 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.919394 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.919485 5115 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247" exitCode=137
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.919658 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.919701 5115 scope.go:117] "RemoveContainer" containerID="4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.952388 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.954813 5115 scope.go:117] "RemoveContainer" containerID="4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247"
Jan 20 09:12:34 crc kubenswrapper[5115]: E0120 09:12:34.955471 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247\": container with ID starting with 4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247 not found: ID does not exist" containerID="4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247"
Jan 20 09:12:34 crc kubenswrapper[5115]: I0120 09:12:34.955524 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247"} err="failed to get container status \"4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247\": rpc error: code = NotFound desc = could not find container \"4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247\": container with ID starting with 4d6b5f4076d96a3976239b71de54fa0176dcbdda361c4d53976d86a2e687e247 not found: ID does not exist"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.393277 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"]
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.393595 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" podUID="0e0393a6-c76b-4bd6-9358-0314c2eca550" containerName="controller-manager" containerID="cri-o://f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026" gracePeriod=30
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.404441 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.418589 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"]
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.418986 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" podUID="6dbb2166-3ca6-40c1-8837-22587ad8df2e" containerName="route-controller-manager" containerID="cri-o://694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea" gracePeriod=30
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.424783 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.888149 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.891999 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.905361 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.915537 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"]
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916158 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916175 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916192 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6dbb2166-3ca6-40c1-8837-22587ad8df2e" containerName="route-controller-manager"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916198 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbb2166-3ca6-40c1-8837-22587ad8df2e" containerName="route-controller-manager"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916214 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e0393a6-c76b-4bd6-9358-0314c2eca550" containerName="controller-manager"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916221 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0393a6-c76b-4bd6-9358-0314c2eca550" containerName="controller-manager"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916324 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="6dbb2166-3ca6-40c1-8837-22587ad8df2e" containerName="route-controller-manager"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916333 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="0e0393a6-c76b-4bd6-9358-0314c2eca550" containerName="controller-manager"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.916342 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.919602 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.926393 5115 generic.go:358] "Generic (PLEG): container finished" podID="6dbb2166-3ca6-40c1-8837-22587ad8df2e" containerID="694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea" exitCode=0
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.926484 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.926602 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" event={"ID":"6dbb2166-3ca6-40c1-8837-22587ad8df2e","Type":"ContainerDied","Data":"694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea"}
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.926648 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz" event={"ID":"6dbb2166-3ca6-40c1-8837-22587ad8df2e","Type":"ContainerDied","Data":"368a735da1f99fc4138c761b29484fa6a4c95fa01e8ee82b62c23cf95bf3f7b8"}
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.926682 5115 scope.go:117] "RemoveContainer" containerID="694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931678 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8q8k\" (UniqueName: \"kubernetes.io/projected/0e0393a6-c76b-4bd6-9358-0314c2eca550-kube-api-access-k8q8k\") pod \"0e0393a6-c76b-4bd6-9358-0314c2eca550\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931729 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e0393a6-c76b-4bd6-9358-0314c2eca550-tmp\") pod \"0e0393a6-c76b-4bd6-9358-0314c2eca550\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931757 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-config\") pod \"0e0393a6-c76b-4bd6-9358-0314c2eca550\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931800 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-client-ca\") pod \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931863 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6dbb2166-3ca6-40c1-8837-22587ad8df2e-tmp\") pod \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931912 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-config\") pod \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931939 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-client-ca\") pod \"0e0393a6-c76b-4bd6-9358-0314c2eca550\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.931963 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dbb2166-3ca6-40c1-8837-22587ad8df2e-serving-cert\") pod \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932033 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0393a6-c76b-4bd6-9358-0314c2eca550-serving-cert\") pod \"0e0393a6-c76b-4bd6-9358-0314c2eca550\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932077 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-proxy-ca-bundles\") pod \"0e0393a6-c76b-4bd6-9358-0314c2eca550\" (UID: \"0e0393a6-c76b-4bd6-9358-0314c2eca550\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932165 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr57s\" (UniqueName: \"kubernetes.io/projected/6dbb2166-3ca6-40c1-8837-22587ad8df2e-kube-api-access-fr57s\") pod \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\" (UID: \"6dbb2166-3ca6-40c1-8837-22587ad8df2e\") "
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932313 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svdp4\" (UniqueName: \"kubernetes.io/projected/3a019ddb-06f4-46e8-b51d-4ff472d661f7-kube-api-access-svdp4\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932401 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a019ddb-06f4-46e8-b51d-4ff472d661f7-tmp\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932438 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-config\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932459 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a019ddb-06f4-46e8-b51d-4ff472d661f7-serving-cert\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.932515 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-client-ca\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.934710 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0e0393a6-c76b-4bd6-9358-0314c2eca550" (UID: "0e0393a6-c76b-4bd6-9358-0314c2eca550"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.934872 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dbb2166-3ca6-40c1-8837-22587ad8df2e-tmp" (OuterVolumeSpecName: "tmp") pod "6dbb2166-3ca6-40c1-8837-22587ad8df2e" (UID: "6dbb2166-3ca6-40c1-8837-22587ad8df2e"). InnerVolumeSpecName "tmp".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.937171 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-config" (OuterVolumeSpecName: "config") pod "0e0393a6-c76b-4bd6-9358-0314c2eca550" (UID: "0e0393a6-c76b-4bd6-9358-0314c2eca550"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.938058 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0393a6-c76b-4bd6-9358-0314c2eca550-tmp" (OuterVolumeSpecName: "tmp") pod "0e0393a6-c76b-4bd6-9358-0314c2eca550" (UID: "0e0393a6-c76b-4bd6-9358-0314c2eca550"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.938127 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0393a6-c76b-4bd6-9358-0314c2eca550-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0e0393a6-c76b-4bd6-9358-0314c2eca550" (UID: "0e0393a6-c76b-4bd6-9358-0314c2eca550"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.938260 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbb2166-3ca6-40c1-8837-22587ad8df2e-kube-api-access-fr57s" (OuterVolumeSpecName: "kube-api-access-fr57s") pod "6dbb2166-3ca6-40c1-8837-22587ad8df2e" (UID: "6dbb2166-3ca6-40c1-8837-22587ad8df2e"). InnerVolumeSpecName "kube-api-access-fr57s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.938431 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.938495 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-client-ca" (OuterVolumeSpecName: "client-ca") pod "6dbb2166-3ca6-40c1-8837-22587ad8df2e" (UID: "6dbb2166-3ca6-40c1-8837-22587ad8df2e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.938695 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-config" (OuterVolumeSpecName: "config") pod "6dbb2166-3ca6-40c1-8837-22587ad8df2e" (UID: "6dbb2166-3ca6-40c1-8837-22587ad8df2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.939652 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-client-ca" (OuterVolumeSpecName: "client-ca") pod "0e0393a6-c76b-4bd6-9358-0314c2eca550" (UID: "0e0393a6-c76b-4bd6-9358-0314c2eca550"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.941481 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.941876 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbb2166-3ca6-40c1-8837-22587ad8df2e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6dbb2166-3ca6-40c1-8837-22587ad8df2e" (UID: "6dbb2166-3ca6-40c1-8837-22587ad8df2e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.943099 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0393a6-c76b-4bd6-9358-0314c2eca550-kube-api-access-k8q8k" (OuterVolumeSpecName: "kube-api-access-k8q8k") pod "0e0393a6-c76b-4bd6-9358-0314c2eca550" (UID: "0e0393a6-c76b-4bd6-9358-0314c2eca550"). InnerVolumeSpecName "kube-api-access-k8q8k". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.946011 5115 generic.go:358] "Generic (PLEG): container finished" podID="0e0393a6-c76b-4bd6-9358-0314c2eca550" containerID="f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026" exitCode=0 Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.946214 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" event={"ID":"0e0393a6-c76b-4bd6-9358-0314c2eca550","Type":"ContainerDied","Data":"f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026"} Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.946245 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" event={"ID":"0e0393a6-c76b-4bd6-9358-0314c2eca550","Type":"ContainerDied","Data":"16f00ae2e909bdbab9e9f0bb68dfa4c4d6e9c21c455eefd3d26a54cf17f6d6dd"} Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.946349 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.946421 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"] Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.950753 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-56vwt"] Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.957824 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.964426 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-56vwt"] Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.968332 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.968550 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.968676 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.968808 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.968949 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.969110 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.969683 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.976351 5115 scope.go:117] "RemoveContainer" containerID="694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea" Jan 20 09:12:35 crc kubenswrapper[5115]: E0120 09:12:35.983045 5115 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea\": container with ID starting with 694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea not found: ID does not exist" containerID="694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.983091 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea"} err="failed to get container status \"694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea\": rpc error: code = NotFound desc = could not find container \"694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea\": container with ID starting with 694c50e214a14f27a0eff68145449372cab7bc07d76f0814f74f905d81efe8ea not found: ID does not exist" Jan 20 09:12:35 crc kubenswrapper[5115]: I0120 09:12:35.983121 5115 scope.go:117] "RemoveContainer" containerID="f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.020145 5115 scope.go:117] "RemoveContainer" containerID="f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026" Jan 20 09:12:36 crc kubenswrapper[5115]: E0120 09:12:36.021084 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026\": container with ID starting with f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026 not found: ID does not exist" containerID="f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.021120 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026"} err="failed 
to get container status \"f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026\": rpc error: code = NotFound desc = could not find container \"f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026\": container with ID starting with f9b27559a5e47c9d1a60fe2eb29dd7b4059fb320392737d90d53647d18545026 not found: ID does not exist" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.033947 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a019ddb-06f4-46e8-b51d-4ff472d661f7-tmp\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.033990 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-client-ca\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034016 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-config\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034032 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a019ddb-06f4-46e8-b51d-4ff472d661f7-serving-cert\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " 
pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034048 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-config\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034095 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/40317894-58cf-4fd9-bbfe-0338895305fb-tmp\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034127 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-client-ca\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034146 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-proxy-ca-bundles\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034183 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-svdp4\" (UniqueName: 
\"kubernetes.io/projected/3a019ddb-06f4-46e8-b51d-4ff472d661f7-kube-api-access-svdp4\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034208 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40317894-58cf-4fd9-bbfe-0338895305fb-serving-cert\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034228 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwtk2\" (UniqueName: \"kubernetes.io/projected/40317894-58cf-4fd9-bbfe-0338895305fb-kube-api-access-vwtk2\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034278 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fr57s\" (UniqueName: \"kubernetes.io/projected/6dbb2166-3ca6-40c1-8837-22587ad8df2e-kube-api-access-fr57s\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034290 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k8q8k\" (UniqueName: \"kubernetes.io/projected/0e0393a6-c76b-4bd6-9358-0314c2eca550-kube-api-access-k8q8k\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034298 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0e0393a6-c76b-4bd6-9358-0314c2eca550-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:36 crc 
kubenswrapper[5115]: I0120 09:12:36.034307 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034316 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034324 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6dbb2166-3ca6-40c1-8837-22587ad8df2e-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034333 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbb2166-3ca6-40c1-8837-22587ad8df2e-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034340 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034349 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dbb2166-3ca6-40c1-8837-22587ad8df2e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034357 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e0393a6-c76b-4bd6-9358-0314c2eca550-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034366 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0e0393a6-c76b-4bd6-9358-0314c2eca550-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.034731 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a019ddb-06f4-46e8-b51d-4ff472d661f7-tmp\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.035598 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-config\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.037471 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-client-ca\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.038197 5115 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.043332 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3a019ddb-06f4-46e8-b51d-4ff472d661f7-serving-cert\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.047958 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"] Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.050069 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7c98cbc-lhp2l"] Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.056414 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-svdp4\" (UniqueName: \"kubernetes.io/projected/3a019ddb-06f4-46e8-b51d-4ff472d661f7-kube-api-access-svdp4\") pod \"route-controller-manager-6cd84fb898-9bd7b\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.135597 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-client-ca\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.135761 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-config\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.135813 5115 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/40317894-58cf-4fd9-bbfe-0338895305fb-tmp\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.135953 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-proxy-ca-bundles\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.136045 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40317894-58cf-4fd9-bbfe-0338895305fb-serving-cert\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.136094 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vwtk2\" (UniqueName: \"kubernetes.io/projected/40317894-58cf-4fd9-bbfe-0338895305fb-kube-api-access-vwtk2\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.136413 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/40317894-58cf-4fd9-bbfe-0338895305fb-tmp\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc 
kubenswrapper[5115]: I0120 09:12:36.137253 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-client-ca\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.137540 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-config\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.139034 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-proxy-ca-bundles\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.140561 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40317894-58cf-4fd9-bbfe-0338895305fb-serving-cert\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.152922 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwtk2\" (UniqueName: \"kubernetes.io/projected/40317894-58cf-4fd9-bbfe-0338895305fb-kube-api-access-vwtk2\") pod \"controller-manager-64f6849bcb-56vwt\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " 
pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.226639 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e0393a6-c76b-4bd6-9358-0314c2eca550" path="/var/lib/kubelet/pods/0e0393a6-c76b-4bd6-9358-0314c2eca550/volumes" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.227386 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.239131 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.277237 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.288196 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"] Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.294142 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b95c9954c-nvlzz"] Jan 20 09:12:36 crc kubenswrapper[5115]: I0120 09:12:36.425731 5115 ???:1] "http: TLS handshake error from 192.168.126.11:53218: no serving certificate available for the kubelet" Jan 20 09:12:38 crc kubenswrapper[5115]: I0120 09:12:38.226165 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dbb2166-3ca6-40c1-8837-22587ad8df2e" path="/var/lib/kubelet/pods/6dbb2166-3ca6-40c1-8837-22587ad8df2e/volumes" Jan 20 09:12:38 crc kubenswrapper[5115]: I0120 09:12:38.482965 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:12:38 crc kubenswrapper[5115]: I0120 09:12:38.483152 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:12:39 crc kubenswrapper[5115]: W0120 09:12:39.032876 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a019ddb_06f4_46e8_b51d_4ff472d661f7.slice/crio-c141319af3b1f534df3ecc8828deaf51aa01a1c31d459530b0f3e2eb484ddb7f WatchSource:0}: Error finding container c141319af3b1f534df3ecc8828deaf51aa01a1c31d459530b0f3e2eb484ddb7f: Status 404 returned error can't find the container with id c141319af3b1f534df3ecc8828deaf51aa01a1c31d459530b0f3e2eb484ddb7f Jan 20 09:12:39 crc kubenswrapper[5115]: W0120 09:12:39.092814 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40317894_58cf_4fd9_bbfe_0338895305fb.slice/crio-ed37d8ccf346c5ab6e9a1610b0705d4a9b425c2e5508f8d656d7c2c9e96a8a2c WatchSource:0}: Error finding container ed37d8ccf346c5ab6e9a1610b0705d4a9b425c2e5508f8d656d7c2c9e96a8a2c: Status 404 returned error can't find the container with id ed37d8ccf346c5ab6e9a1610b0705d4a9b425c2e5508f8d656d7c2c9e96a8a2c Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.982755 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" 
event={"ID":"40317894-58cf-4fd9-bbfe-0338895305fb","Type":"ContainerStarted","Data":"d0381859b81111be73fed33e571215f4eb400274eea60f9124171aa0fdfea2b4"} Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.983123 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.983133 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" event={"ID":"40317894-58cf-4fd9-bbfe-0338895305fb","Type":"ContainerStarted","Data":"ed37d8ccf346c5ab6e9a1610b0705d4a9b425c2e5508f8d656d7c2c9e96a8a2c"} Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.985388 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" event={"ID":"3a019ddb-06f4-46e8-b51d-4ff472d661f7","Type":"ContainerStarted","Data":"a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506"} Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.985415 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" event={"ID":"3a019ddb-06f4-46e8-b51d-4ff472d661f7","Type":"ContainerStarted","Data":"c141319af3b1f534df3ecc8828deaf51aa01a1c31d459530b0f3e2eb484ddb7f"} Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.985787 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:39 crc kubenswrapper[5115]: I0120 09:12:39.994170 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:40 crc kubenswrapper[5115]: I0120 09:12:40.011731 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" podStartSLOduration=5.011708253 podStartE2EDuration="5.011708253s" podCreationTimestamp="2026-01-20 09:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:12:40.004988912 +0000 UTC m=+270.173767482" watchObservedRunningTime="2026-01-20 09:12:40.011708253 +0000 UTC m=+270.180486793" Jan 20 09:12:40 crc kubenswrapper[5115]: I0120 09:12:40.026611 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" podStartSLOduration=5.026575816 podStartE2EDuration="5.026575816s" podCreationTimestamp="2026-01-20 09:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:12:40.020402031 +0000 UTC m=+270.189180571" watchObservedRunningTime="2026-01-20 09:12:40.026575816 +0000 UTC m=+270.195354346" Jan 20 09:12:40 crc kubenswrapper[5115]: I0120 09:12:40.234353 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:41 crc kubenswrapper[5115]: I0120 09:12:41.948027 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 20 09:12:45 crc kubenswrapper[5115]: I0120 09:12:45.728505 5115 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-9gfdh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Jan 20 09:12:45 crc kubenswrapper[5115]: I0120 09:12:45.728949 5115 prober.go:120] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Jan 20 09:12:46 crc kubenswrapper[5115]: I0120 09:12:46.034257 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" event={"ID":"3984fc5a-413e-46e1-94ab-3c230891fe87","Type":"ContainerDied","Data":"875b2918867b6e3f78a8dae2adc4f181e4875284a8cd56fc5c6d213e75261ea2"} Jan 20 09:12:46 crc kubenswrapper[5115]: I0120 09:12:46.034175 5115 generic.go:358] "Generic (PLEG): container finished" podID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerID="875b2918867b6e3f78a8dae2adc4f181e4875284a8cd56fc5c6d213e75261ea2" exitCode=0 Jan 20 09:12:46 crc kubenswrapper[5115]: I0120 09:12:46.034882 5115 scope.go:117] "RemoveContainer" containerID="875b2918867b6e3f78a8dae2adc4f181e4875284a8cd56fc5c6d213e75261ea2" Jan 20 09:12:47 crc kubenswrapper[5115]: I0120 09:12:47.043622 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" event={"ID":"3984fc5a-413e-46e1-94ab-3c230891fe87","Type":"ContainerStarted","Data":"fbcae2a717246018256a95dc1f3b2f061bf042569074a110a6a284fcd803f2bb"} Jan 20 09:12:47 crc kubenswrapper[5115]: I0120 09:12:47.044796 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:12:47 crc kubenswrapper[5115]: I0120 09:12:47.048423 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:12:48 crc kubenswrapper[5115]: I0120 09:12:48.084415 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 20 
09:12:49 crc kubenswrapper[5115]: I0120 09:12:49.906415 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 20 09:12:51 crc kubenswrapper[5115]: I0120 09:12:51.228484 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 20 09:12:51 crc kubenswrapper[5115]: I0120 09:12:51.658213 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 20 09:12:52 crc kubenswrapper[5115]: I0120 09:12:52.512215 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 20 09:12:53 crc kubenswrapper[5115]: I0120 09:12:53.626958 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 20 09:12:54 crc kubenswrapper[5115]: I0120 09:12:54.134639 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.370887 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-56vwt"] Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.371135 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" podUID="40317894-58cf-4fd9-bbfe-0338895305fb" containerName="controller-manager" containerID="cri-o://d0381859b81111be73fed33e571215f4eb400274eea60f9124171aa0fdfea2b4" gracePeriod=30 Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.397416 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"] Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.398059 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" podUID="3a019ddb-06f4-46e8-b51d-4ff472d661f7" containerName="route-controller-manager" containerID="cri-o://a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506" gracePeriod=30 Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.946967 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.977805 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"] Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.979122 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a019ddb-06f4-46e8-b51d-4ff472d661f7" containerName="route-controller-manager" Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.979151 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a019ddb-06f4-46e8-b51d-4ff472d661f7" containerName="route-controller-manager" Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.979299 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a019ddb-06f4-46e8-b51d-4ff472d661f7" containerName="route-controller-manager" Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.983333 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:12:55 crc kubenswrapper[5115]: I0120 09:12:55.992989 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"] Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.046627 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-client-ca\") pod \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.046675 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a019ddb-06f4-46e8-b51d-4ff472d661f7-serving-cert\") pod \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.046763 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-config\") pod \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.046789 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a019ddb-06f4-46e8-b51d-4ff472d661f7-tmp\") pod \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\" (UID: \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.046804 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svdp4\" (UniqueName: \"kubernetes.io/projected/3a019ddb-06f4-46e8-b51d-4ff472d661f7-kube-api-access-svdp4\") pod \"3a019ddb-06f4-46e8-b51d-4ff472d661f7\" (UID: 
\"3a019ddb-06f4-46e8-b51d-4ff472d661f7\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.048391 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-config" (OuterVolumeSpecName: "config") pod "3a019ddb-06f4-46e8-b51d-4ff472d661f7" (UID: "3a019ddb-06f4-46e8-b51d-4ff472d661f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.048465 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "3a019ddb-06f4-46e8-b51d-4ff472d661f7" (UID: "3a019ddb-06f4-46e8-b51d-4ff472d661f7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.049116 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a019ddb-06f4-46e8-b51d-4ff472d661f7-tmp" (OuterVolumeSpecName: "tmp") pod "3a019ddb-06f4-46e8-b51d-4ff472d661f7" (UID: "3a019ddb-06f4-46e8-b51d-4ff472d661f7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.053071 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a019ddb-06f4-46e8-b51d-4ff472d661f7-kube-api-access-svdp4" (OuterVolumeSpecName: "kube-api-access-svdp4") pod "3a019ddb-06f4-46e8-b51d-4ff472d661f7" (UID: "3a019ddb-06f4-46e8-b51d-4ff472d661f7"). InnerVolumeSpecName "kube-api-access-svdp4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.053086 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a019ddb-06f4-46e8-b51d-4ff472d661f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3a019ddb-06f4-46e8-b51d-4ff472d661f7" (UID: "3a019ddb-06f4-46e8-b51d-4ff472d661f7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.112727 5115 generic.go:358] "Generic (PLEG): container finished" podID="3a019ddb-06f4-46e8-b51d-4ff472d661f7" containerID="a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506" exitCode=0 Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.112816 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" event={"ID":"3a019ddb-06f4-46e8-b51d-4ff472d661f7","Type":"ContainerDied","Data":"a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506"} Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.112833 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.112862 5115 scope.go:117] "RemoveContainer" containerID="a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.112851 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b" event={"ID":"3a019ddb-06f4-46e8-b51d-4ff472d661f7","Type":"ContainerDied","Data":"c141319af3b1f534df3ecc8828deaf51aa01a1c31d459530b0f3e2eb484ddb7f"} Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.114388 5115 generic.go:358] "Generic (PLEG): container finished" podID="40317894-58cf-4fd9-bbfe-0338895305fb" containerID="d0381859b81111be73fed33e571215f4eb400274eea60f9124171aa0fdfea2b4" exitCode=0 Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.114470 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" event={"ID":"40317894-58cf-4fd9-bbfe-0338895305fb","Type":"ContainerDied","Data":"d0381859b81111be73fed33e571215f4eb400274eea60f9124171aa0fdfea2b4"} Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.138985 5115 scope.go:117] "RemoveContainer" containerID="a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506" Jan 20 09:12:56 crc kubenswrapper[5115]: E0120 09:12:56.139392 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506\": container with ID starting with a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506 not found: ID does not exist" containerID="a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.139436 5115 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506"} err="failed to get container status \"a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506\": rpc error: code = NotFound desc = could not find container \"a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506\": container with ID starting with a3914ab4f691e634c5241cfd2dd62d61db474c12c2855c3a15a3e9e6d1375506 not found: ID does not exist" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.142174 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.148626 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-client-ca\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.148776 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-config\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.148880 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsn42\" (UniqueName: \"kubernetes.io/projected/008b7b41-90a9-4871-a024-a4a8736d5239-kube-api-access-qsn42\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " 
pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.148970 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/008b7b41-90a9-4871-a024-a4a8736d5239-serving-cert\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.149030 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/008b7b41-90a9-4871-a024-a4a8736d5239-tmp\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.149085 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.149101 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a019ddb-06f4-46e8-b51d-4ff472d661f7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.149113 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a019ddb-06f4-46e8-b51d-4ff472d661f7-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.149125 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a019ddb-06f4-46e8-b51d-4ff472d661f7-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:56 
crc kubenswrapper[5115]: I0120 09:12:56.149136 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-svdp4\" (UniqueName: \"kubernetes.io/projected/3a019ddb-06f4-46e8-b51d-4ff472d661f7-kube-api-access-svdp4\") on node \"crc\" DevicePath \"\"" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.160266 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"] Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.168148 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-9bd7b"] Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.189418 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5498596948-x8xdh"] Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.190747 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40317894-58cf-4fd9-bbfe-0338895305fb" containerName="controller-manager" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.190795 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="40317894-58cf-4fd9-bbfe-0338895305fb" containerName="controller-manager" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.191102 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="40317894-58cf-4fd9-bbfe-0338895305fb" containerName="controller-manager" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.200106 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5498596948-x8xdh"] Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.200296 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.233774 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a019ddb-06f4-46e8-b51d-4ff472d661f7" path="/var/lib/kubelet/pods/3a019ddb-06f4-46e8-b51d-4ff472d661f7/volumes" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.250465 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-config\") pod \"40317894-58cf-4fd9-bbfe-0338895305fb\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.250526 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwtk2\" (UniqueName: \"kubernetes.io/projected/40317894-58cf-4fd9-bbfe-0338895305fb-kube-api-access-vwtk2\") pod \"40317894-58cf-4fd9-bbfe-0338895305fb\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.250612 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40317894-58cf-4fd9-bbfe-0338895305fb-serving-cert\") pod \"40317894-58cf-4fd9-bbfe-0338895305fb\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.250654 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/40317894-58cf-4fd9-bbfe-0338895305fb-tmp\") pod \"40317894-58cf-4fd9-bbfe-0338895305fb\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.250714 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-client-ca\") pod 
\"40317894-58cf-4fd9-bbfe-0338895305fb\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.250799 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-proxy-ca-bundles\") pod \"40317894-58cf-4fd9-bbfe-0338895305fb\" (UID: \"40317894-58cf-4fd9-bbfe-0338895305fb\") " Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251046 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-client-ca\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251096 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-config\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251131 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qsn42\" (UniqueName: \"kubernetes.io/projected/008b7b41-90a9-4871-a024-a4a8736d5239-kube-api-access-qsn42\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251164 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/008b7b41-90a9-4871-a024-a4a8736d5239-serving-cert\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251203 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/008b7b41-90a9-4871-a024-a4a8736d5239-tmp\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251652 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/008b7b41-90a9-4871-a024-a4a8736d5239-tmp\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.252315 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-config" (OuterVolumeSpecName: "config") pod "40317894-58cf-4fd9-bbfe-0338895305fb" (UID: "40317894-58cf-4fd9-bbfe-0338895305fb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.252608 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-client-ca\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.252668 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-config\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.251819 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "40317894-58cf-4fd9-bbfe-0338895305fb" (UID: "40317894-58cf-4fd9-bbfe-0338895305fb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.253586 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40317894-58cf-4fd9-bbfe-0338895305fb-tmp" (OuterVolumeSpecName: "tmp") pod "40317894-58cf-4fd9-bbfe-0338895305fb" (UID: "40317894-58cf-4fd9-bbfe-0338895305fb"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.253908 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-client-ca" (OuterVolumeSpecName: "client-ca") pod "40317894-58cf-4fd9-bbfe-0338895305fb" (UID: "40317894-58cf-4fd9-bbfe-0338895305fb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.256443 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40317894-58cf-4fd9-bbfe-0338895305fb-kube-api-access-vwtk2" (OuterVolumeSpecName: "kube-api-access-vwtk2") pod "40317894-58cf-4fd9-bbfe-0338895305fb" (UID: "40317894-58cf-4fd9-bbfe-0338895305fb"). InnerVolumeSpecName "kube-api-access-vwtk2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.256991 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40317894-58cf-4fd9-bbfe-0338895305fb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "40317894-58cf-4fd9-bbfe-0338895305fb" (UID: "40317894-58cf-4fd9-bbfe-0338895305fb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.257941 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/008b7b41-90a9-4871-a024-a4a8736d5239-serving-cert\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.268562 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsn42\" (UniqueName: \"kubernetes.io/projected/008b7b41-90a9-4871-a024-a4a8736d5239-kube-api-access-qsn42\") pod \"route-controller-manager-fd648b944-86lpr\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") " pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.310469 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352646 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-client-ca\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352718 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e383ce-519b-41ba-8dda-d0d71e14316e-serving-cert\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352747 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16e383ce-519b-41ba-8dda-d0d71e14316e-tmp\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352789 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-proxy-ca-bundles\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352810 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-config\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352840 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5j42\" (UniqueName: \"kubernetes.io/projected/16e383ce-519b-41ba-8dda-d0d71e14316e-kube-api-access-q5j42\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352885 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352943 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352958 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vwtk2\" (UniqueName: \"kubernetes.io/projected/40317894-58cf-4fd9-bbfe-0338895305fb-kube-api-access-vwtk2\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352972 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40317894-58cf-4fd9-bbfe-0338895305fb-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352983 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/40317894-58cf-4fd9-bbfe-0338895305fb-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.352993 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40317894-58cf-4fd9-bbfe-0338895305fb-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.453977 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-client-ca\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.454475 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e383ce-519b-41ba-8dda-d0d71e14316e-serving-cert\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.454505 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16e383ce-519b-41ba-8dda-d0d71e14316e-tmp\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.454547 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-proxy-ca-bundles\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.454570 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-config\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.454598 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5j42\" (UniqueName: \"kubernetes.io/projected/16e383ce-519b-41ba-8dda-d0d71e14316e-kube-api-access-q5j42\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.455064 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-client-ca\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.455294 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16e383ce-519b-41ba-8dda-d0d71e14316e-tmp\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.456074 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-proxy-ca-bundles\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.459445 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-config\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.466055 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e383ce-519b-41ba-8dda-d0d71e14316e-serving-cert\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.477607 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5j42\" (UniqueName: \"kubernetes.io/projected/16e383ce-519b-41ba-8dda-d0d71e14316e-kube-api-access-q5j42\") pod \"controller-manager-5498596948-x8xdh\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.515735 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.718964 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"]
Jan 20 09:12:56 crc kubenswrapper[5115]: W0120 09:12:56.722676 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod008b7b41_90a9_4871_a024_a4a8736d5239.slice/crio-f0da1915c436f35830eb682754d71a8751f31073762c17f3edc3efcf56bdbf51 WatchSource:0}: Error finding container f0da1915c436f35830eb682754d71a8751f31073762c17f3edc3efcf56bdbf51: Status 404 returned error can't find the container with id f0da1915c436f35830eb682754d71a8751f31073762c17f3edc3efcf56bdbf51
Jan 20 09:12:56 crc kubenswrapper[5115]: I0120 09:12:56.920786 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5498596948-x8xdh"]
Jan 20 09:12:56 crc kubenswrapper[5115]: W0120 09:12:56.929074 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e383ce_519b_41ba_8dda_d0d71e14316e.slice/crio-c218df20d2deac261a17d41c4da30b290a798b6508dd424db9f657d86d094615 WatchSource:0}: Error finding container c218df20d2deac261a17d41c4da30b290a798b6508dd424db9f657d86d094615: Status 404 returned error can't find the container with id c218df20d2deac261a17d41c4da30b290a798b6508dd424db9f657d86d094615
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.121721 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" event={"ID":"16e383ce-519b-41ba-8dda-d0d71e14316e","Type":"ContainerStarted","Data":"3cb0e56fdde9f8c458e1b54cda0c342be87c577e32707c597207b4b4f034a583"}
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.121773 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" event={"ID":"16e383ce-519b-41ba-8dda-d0d71e14316e","Type":"ContainerStarted","Data":"c218df20d2deac261a17d41c4da30b290a798b6508dd424db9f657d86d094615"}
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.123180 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.124468 5115 patch_prober.go:28] interesting pod/controller-manager-5498596948-x8xdh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.124515 5115 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" podUID="16e383ce-519b-41ba-8dda-d0d71e14316e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.125104 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt" event={"ID":"40317894-58cf-4fd9-bbfe-0338895305fb","Type":"ContainerDied","Data":"ed37d8ccf346c5ab6e9a1610b0705d4a9b425c2e5508f8d656d7c2c9e96a8a2c"}
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.125131 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64f6849bcb-56vwt"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.125148 5115 scope.go:117] "RemoveContainer" containerID="d0381859b81111be73fed33e571215f4eb400274eea60f9124171aa0fdfea2b4"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.127083 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" event={"ID":"008b7b41-90a9-4871-a024-a4a8736d5239","Type":"ContainerStarted","Data":"487dc612f01a3ebd6902165463db1ae797ab9f3c8a5b5da1d24c0a8ff2e2b31d"}
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.127108 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" event={"ID":"008b7b41-90a9-4871-a024-a4a8736d5239","Type":"ContainerStarted","Data":"f0da1915c436f35830eb682754d71a8751f31073762c17f3edc3efcf56bdbf51"}
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.127307 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.147592 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" podStartSLOduration=2.147571675 podStartE2EDuration="2.147571675s" podCreationTimestamp="2026-01-20 09:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:12:57.144304882 +0000 UTC m=+287.313083422" watchObservedRunningTime="2026-01-20 09:12:57.147571675 +0000 UTC m=+287.316350215"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.176916 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" podStartSLOduration=2.176863127 podStartE2EDuration="2.176863127s" podCreationTimestamp="2026-01-20 09:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:12:57.17309336 +0000 UTC m=+287.341871900" watchObservedRunningTime="2026-01-20 09:12:57.176863127 +0000 UTC m=+287.345641677"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.189373 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-56vwt"]
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.194212 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-56vwt"]
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.563198 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:12:57 crc kubenswrapper[5115]: I0120 09:12:57.682566 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 20 09:12:58 crc kubenswrapper[5115]: I0120 09:12:58.148832 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh"
Jan 20 09:12:58 crc kubenswrapper[5115]: I0120 09:12:58.227479 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40317894-58cf-4fd9-bbfe-0338895305fb" path="/var/lib/kubelet/pods/40317894-58cf-4fd9-bbfe-0338895305fb/volumes"
Jan 20 09:12:59 crc kubenswrapper[5115]: I0120 09:12:59.240885 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Jan 20 09:12:59 crc kubenswrapper[5115]: I0120 09:12:59.342910 5115 ???:1] "http: TLS handshake error from 192.168.126.11:55676: no serving certificate available for the kubelet"
Jan 20 09:12:59 crc kubenswrapper[5115]: I0120 09:12:59.449393 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 20 09:13:00 crc kubenswrapper[5115]: I0120 09:13:00.575268 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Jan 20 09:13:01 crc kubenswrapper[5115]: I0120 09:13:01.399585 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 20 09:13:01 crc kubenswrapper[5115]: I0120 09:13:01.674870 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 20 09:13:03 crc kubenswrapper[5115]: I0120 09:13:03.462555 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Jan 20 09:13:03 crc kubenswrapper[5115]: I0120 09:13:03.707481 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 20 09:13:03 crc kubenswrapper[5115]: I0120 09:13:03.867357 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 20 09:13:03 crc kubenswrapper[5115]: I0120 09:13:03.888312 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 20 09:13:05 crc kubenswrapper[5115]: I0120 09:13:05.926576 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Jan 20 09:13:07 crc kubenswrapper[5115]: I0120 09:13:07.570587 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 20 09:13:08 crc kubenswrapper[5115]: I0120 09:13:08.483417 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:13:08 crc kubenswrapper[5115]: I0120 09:13:08.483481 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:13:08 crc kubenswrapper[5115]: I0120 09:13:08.483532 5115 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd"
Jan 20 09:13:08 crc kubenswrapper[5115]: I0120 09:13:08.484064 5115 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586"} pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 09:13:08 crc kubenswrapper[5115]: I0120 09:13:08.484119 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" containerID="cri-o://95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586" gracePeriod=600
Jan 20 09:13:09 crc kubenswrapper[5115]: I0120 09:13:09.215508 5115 generic.go:358] "Generic (PLEG): container finished" podID="dc89765b-3b00-4f86-ae67-a5088c182918" containerID="95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586" exitCode=0
Jan 20 09:13:09 crc kubenswrapper[5115]: I0120 09:13:09.215612 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerDied","Data":"95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586"}
Jan 20 09:13:09 crc kubenswrapper[5115]: I0120 09:13:09.216422 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerStarted","Data":"91dc8479398c4ca8a212adb6ee5aaefb3869b82e5fade77dc4b295c2c867eb29"}
Jan 20 09:13:10 crc kubenswrapper[5115]: I0120 09:13:10.379049 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 20 09:13:10 crc kubenswrapper[5115]: I0120 09:13:10.379063 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 20 09:13:13 crc kubenswrapper[5115]: I0120 09:13:13.629595 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 20 09:13:27 crc kubenswrapper[5115]: I0120 09:13:27.372049 5115 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.058172 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"]
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.058996 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" podUID="008b7b41-90a9-4871-a024-a4a8736d5239" containerName="route-controller-manager" containerID="cri-o://487dc612f01a3ebd6902165463db1ae797ab9f3c8a5b5da1d24c0a8ff2e2b31d" gracePeriod=30
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.395265 5115 generic.go:358] "Generic (PLEG): container finished" podID="008b7b41-90a9-4871-a024-a4a8736d5239" containerID="487dc612f01a3ebd6902165463db1ae797ab9f3c8a5b5da1d24c0a8ff2e2b31d" exitCode=0
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.395411 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" event={"ID":"008b7b41-90a9-4871-a024-a4a8736d5239","Type":"ContainerDied","Data":"487dc612f01a3ebd6902165463db1ae797ab9f3c8a5b5da1d24c0a8ff2e2b31d"}
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.572317 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.637058 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"]
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.638011 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="008b7b41-90a9-4871-a024-a4a8736d5239" containerName="route-controller-manager"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.638041 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="008b7b41-90a9-4871-a024-a4a8736d5239" containerName="route-controller-manager"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.638171 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="008b7b41-90a9-4871-a024-a4a8736d5239" containerName="route-controller-manager"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.642854 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.648658 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"]
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.671535 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsn42\" (UniqueName: \"kubernetes.io/projected/008b7b41-90a9-4871-a024-a4a8736d5239-kube-api-access-qsn42\") pod \"008b7b41-90a9-4871-a024-a4a8736d5239\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") "
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.671601 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-client-ca\") pod \"008b7b41-90a9-4871-a024-a4a8736d5239\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") "
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.671714 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-config\") pod \"008b7b41-90a9-4871-a024-a4a8736d5239\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") "
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.673050 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-client-ca" (OuterVolumeSpecName: "client-ca") pod "008b7b41-90a9-4871-a024-a4a8736d5239" (UID: "008b7b41-90a9-4871-a024-a4a8736d5239"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.673123 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/008b7b41-90a9-4871-a024-a4a8736d5239-tmp\") pod \"008b7b41-90a9-4871-a024-a4a8736d5239\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") "
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.673165 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/008b7b41-90a9-4871-a024-a4a8736d5239-serving-cert\") pod \"008b7b41-90a9-4871-a024-a4a8736d5239\" (UID: \"008b7b41-90a9-4871-a024-a4a8736d5239\") "
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.673600 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.673870 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008b7b41-90a9-4871-a024-a4a8736d5239-tmp" (OuterVolumeSpecName: "tmp") pod "008b7b41-90a9-4871-a024-a4a8736d5239" (UID: "008b7b41-90a9-4871-a024-a4a8736d5239"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.674640 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-config" (OuterVolumeSpecName: "config") pod "008b7b41-90a9-4871-a024-a4a8736d5239" (UID: "008b7b41-90a9-4871-a024-a4a8736d5239"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.681136 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008b7b41-90a9-4871-a024-a4a8736d5239-kube-api-access-qsn42" (OuterVolumeSpecName: "kube-api-access-qsn42") pod "008b7b41-90a9-4871-a024-a4a8736d5239" (UID: "008b7b41-90a9-4871-a024-a4a8736d5239"). InnerVolumeSpecName "kube-api-access-qsn42". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.681514 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008b7b41-90a9-4871-a024-a4a8736d5239-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "008b7b41-90a9-4871-a024-a4a8736d5239" (UID: "008b7b41-90a9-4871-a024-a4a8736d5239"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.774867 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwzpt\" (UniqueName: \"kubernetes.io/projected/6f410e5c-783d-4416-890a-e2290c4e3505-kube-api-access-dwzpt\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.774970 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f410e5c-783d-4416-890a-e2290c4e3505-client-ca\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775196 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f410e5c-783d-4416-890a-e2290c4e3505-serving-cert\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775347 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6f410e5c-783d-4416-890a-e2290c4e3505-tmp\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775438 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f410e5c-783d-4416-890a-e2290c4e3505-config\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775633 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qsn42\" (UniqueName: \"kubernetes.io/projected/008b7b41-90a9-4871-a024-a4a8736d5239-kube-api-access-qsn42\") on node \"crc\" DevicePath \"\""
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775661 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/008b7b41-90a9-4871-a024-a4a8736d5239-config\") on node \"crc\" DevicePath \"\""
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775671 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/008b7b41-90a9-4871-a024-a4a8736d5239-tmp\") on node \"crc\" DevicePath \"\""
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.775682 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/008b7b41-90a9-4871-a024-a4a8736d5239-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.877115 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6f410e5c-783d-4416-890a-e2290c4e3505-tmp\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.877178 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f410e5c-783d-4416-890a-e2290c4e3505-config\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.877238 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dwzpt\" (UniqueName: \"kubernetes.io/projected/6f410e5c-783d-4416-890a-e2290c4e3505-kube-api-access-dwzpt\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.877289 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f410e5c-783d-4416-890a-e2290c4e3505-client-ca\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.877338 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f410e5c-783d-4416-890a-e2290c4e3505-serving-cert\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.878157 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6f410e5c-783d-4416-890a-e2290c4e3505-tmp\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.878879 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f410e5c-783d-4416-890a-e2290c4e3505-client-ca\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.878934 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f410e5c-783d-4416-890a-e2290c4e3505-config\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"
Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.884725 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/6f410e5c-783d-4416-890a-e2290c4e3505-serving-cert\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.894156 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwzpt\" (UniqueName: \"kubernetes.io/projected/6f410e5c-783d-4416-890a-e2290c4e3505-kube-api-access-dwzpt\") pod \"route-controller-manager-6cd84fb898-7pw9r\" (UID: \"6f410e5c-783d-4416-890a-e2290c4e3505\") " pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:36 crc kubenswrapper[5115]: I0120 09:13:36.967160 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.394068 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r"] Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.404286 5115 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.414452 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.414450 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr" event={"ID":"008b7b41-90a9-4871-a024-a4a8736d5239","Type":"ContainerDied","Data":"f0da1915c436f35830eb682754d71a8751f31073762c17f3edc3efcf56bdbf51"} Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.414668 5115 scope.go:117] "RemoveContainer" containerID="487dc612f01a3ebd6902165463db1ae797ab9f3c8a5b5da1d24c0a8ff2e2b31d" Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.424068 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" event={"ID":"6f410e5c-783d-4416-890a-e2290c4e3505","Type":"ContainerStarted","Data":"11e018dcaaf20f96c4bd4c428aa42ac2297f804e1ff02a43d5e968fdc1f8730e"} Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.460186 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"] Jan 20 09:13:37 crc kubenswrapper[5115]: I0120 09:13:37.469146 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd648b944-86lpr"] Jan 20 09:13:38 crc kubenswrapper[5115]: I0120 09:13:38.225486 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008b7b41-90a9-4871-a024-a4a8736d5239" path="/var/lib/kubelet/pods/008b7b41-90a9-4871-a024-a4a8736d5239/volumes" Jan 20 09:13:38 crc kubenswrapper[5115]: I0120 09:13:38.435419 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" event={"ID":"6f410e5c-783d-4416-890a-e2290c4e3505","Type":"ContainerStarted","Data":"73bf3ff3a8dd27e05a7843dd71052334ace1849d2e63752f441a16d140483dba"} Jan 20 
09:13:38 crc kubenswrapper[5115]: I0120 09:13:38.436561 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:38 crc kubenswrapper[5115]: I0120 09:13:38.445543 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" Jan 20 09:13:38 crc kubenswrapper[5115]: I0120 09:13:38.460435 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cd84fb898-7pw9r" podStartSLOduration=2.460415454 podStartE2EDuration="2.460415454s" podCreationTimestamp="2026-01-20 09:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:13:38.458120972 +0000 UTC m=+328.626899552" watchObservedRunningTime="2026-01-20 09:13:38.460415454 +0000 UTC m=+328.629193984" Jan 20 09:13:55 crc kubenswrapper[5115]: I0120 09:13:55.343831 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5498596948-x8xdh"] Jan 20 09:13:55 crc kubenswrapper[5115]: I0120 09:13:55.344495 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" podUID="16e383ce-519b-41ba-8dda-d0d71e14316e" containerName="controller-manager" containerID="cri-o://3cb0e56fdde9f8c458e1b54cda0c342be87c577e32707c597207b4b4f034a583" gracePeriod=30 Jan 20 09:13:55 crc kubenswrapper[5115]: I0120 09:13:55.545825 5115 generic.go:358] "Generic (PLEG): container finished" podID="16e383ce-519b-41ba-8dda-d0d71e14316e" containerID="3cb0e56fdde9f8c458e1b54cda0c342be87c577e32707c597207b4b4f034a583" exitCode=0 Jan 20 09:13:55 crc kubenswrapper[5115]: I0120 09:13:55.545929 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" event={"ID":"16e383ce-519b-41ba-8dda-d0d71e14316e","Type":"ContainerDied","Data":"3cb0e56fdde9f8c458e1b54cda0c342be87c577e32707c597207b4b4f034a583"} Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.328543 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.362301 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-vvczq"] Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.362963 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16e383ce-519b-41ba-8dda-d0d71e14316e" containerName="controller-manager" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.362978 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e383ce-519b-41ba-8dda-d0d71e14316e" containerName="controller-manager" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.363265 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="16e383ce-519b-41ba-8dda-d0d71e14316e" containerName="controller-manager" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.369665 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.381251 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-vvczq"] Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.443923 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-client-ca\") pod \"16e383ce-519b-41ba-8dda-d0d71e14316e\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.444021 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5j42\" (UniqueName: \"kubernetes.io/projected/16e383ce-519b-41ba-8dda-d0d71e14316e-kube-api-access-q5j42\") pod \"16e383ce-519b-41ba-8dda-d0d71e14316e\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.444107 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e383ce-519b-41ba-8dda-d0d71e14316e-serving-cert\") pod \"16e383ce-519b-41ba-8dda-d0d71e14316e\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.444132 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16e383ce-519b-41ba-8dda-d0d71e14316e-tmp\") pod \"16e383ce-519b-41ba-8dda-d0d71e14316e\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.444585 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16e383ce-519b-41ba-8dda-d0d71e14316e-tmp" (OuterVolumeSpecName: "tmp") pod "16e383ce-519b-41ba-8dda-d0d71e14316e" (UID: 
"16e383ce-519b-41ba-8dda-d0d71e14316e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.444809 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-config\") pod \"16e383ce-519b-41ba-8dda-d0d71e14316e\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.445142 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-proxy-ca-bundles\") pod \"16e383ce-519b-41ba-8dda-d0d71e14316e\" (UID: \"16e383ce-519b-41ba-8dda-d0d71e14316e\") " Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.445514 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-client-ca" (OuterVolumeSpecName: "client-ca") pod "16e383ce-519b-41ba-8dda-d0d71e14316e" (UID: "16e383ce-519b-41ba-8dda-d0d71e14316e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.445518 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-config" (OuterVolumeSpecName: "config") pod "16e383ce-519b-41ba-8dda-d0d71e14316e" (UID: "16e383ce-519b-41ba-8dda-d0d71e14316e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.445960 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/149d552c-3752-4b8b-9802-83d80439f19c-tmp\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.446212 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftff9\" (UniqueName: \"kubernetes.io/projected/149d552c-3752-4b8b-9802-83d80439f19c-kube-api-access-ftff9\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.446246 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "16e383ce-519b-41ba-8dda-d0d71e14316e" (UID: "16e383ce-519b-41ba-8dda-d0d71e14316e"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.446459 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-client-ca\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.446792 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-config\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.447014 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-proxy-ca-bundles\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.447324 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149d552c-3752-4b8b-9802-83d80439f19c-serving-cert\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.447536 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/16e383ce-519b-41ba-8dda-d0d71e14316e-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.447693 5115 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-config\") on node \"crc\" DevicePath \"\"" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.447824 5115 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.448014 5115 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16e383ce-519b-41ba-8dda-d0d71e14316e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.454773 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e383ce-519b-41ba-8dda-d0d71e14316e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16e383ce-519b-41ba-8dda-d0d71e14316e" (UID: "16e383ce-519b-41ba-8dda-d0d71e14316e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.458102 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e383ce-519b-41ba-8dda-d0d71e14316e-kube-api-access-q5j42" (OuterVolumeSpecName: "kube-api-access-q5j42") pod "16e383ce-519b-41ba-8dda-d0d71e14316e" (UID: "16e383ce-519b-41ba-8dda-d0d71e14316e"). InnerVolumeSpecName "kube-api-access-q5j42". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.550971 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149d552c-3752-4b8b-9802-83d80439f19c-serving-cert\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.551966 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/149d552c-3752-4b8b-9802-83d80439f19c-tmp\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.552025 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ftff9\" (UniqueName: \"kubernetes.io/projected/149d552c-3752-4b8b-9802-83d80439f19c-kube-api-access-ftff9\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.552094 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-client-ca\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.552142 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-config\") pod 
\"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.552342 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-proxy-ca-bundles\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.552483 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q5j42\" (UniqueName: \"kubernetes.io/projected/16e383ce-519b-41ba-8dda-d0d71e14316e-kube-api-access-q5j42\") on node \"crc\" DevicePath \"\"" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.552516 5115 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e383ce-519b-41ba-8dda-d0d71e14316e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.553686 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-config\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.553878 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/149d552c-3752-4b8b-9802-83d80439f19c-tmp\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.554350 5115 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-proxy-ca-bundles\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.556266 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" event={"ID":"16e383ce-519b-41ba-8dda-d0d71e14316e","Type":"ContainerDied","Data":"c218df20d2deac261a17d41c4da30b290a798b6508dd424db9f657d86d094615"} Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.556327 5115 scope.go:117] "RemoveContainer" containerID="3cb0e56fdde9f8c458e1b54cda0c342be87c577e32707c597207b4b4f034a583" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.556424 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5498596948-x8xdh" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.557047 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/149d552c-3752-4b8b-9802-83d80439f19c-client-ca\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.580120 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149d552c-3752-4b8b-9802-83d80439f19c-serving-cert\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.584168 5115 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ftff9\" (UniqueName: \"kubernetes.io/projected/149d552c-3752-4b8b-9802-83d80439f19c-kube-api-access-ftff9\") pod \"controller-manager-64f6849bcb-vvczq\" (UID: \"149d552c-3752-4b8b-9802-83d80439f19c\") " pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.636655 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5498596948-x8xdh"] Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.639758 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5498596948-x8xdh"] Jan 20 09:13:56 crc kubenswrapper[5115]: I0120 09:13:56.687097 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:57 crc kubenswrapper[5115]: I0120 09:13:57.109398 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64f6849bcb-vvczq"] Jan 20 09:13:57 crc kubenswrapper[5115]: I0120 09:13:57.564787 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" event={"ID":"149d552c-3752-4b8b-9802-83d80439f19c","Type":"ContainerStarted","Data":"7400aecff61d1c18fd4f4f9e6c8d1231c82954e46b428fbe93c4d4bf520b0aa9"} Jan 20 09:13:57 crc kubenswrapper[5115]: I0120 09:13:57.564842 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:57 crc kubenswrapper[5115]: I0120 09:13:57.564858 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" 
event={"ID":"149d552c-3752-4b8b-9802-83d80439f19c","Type":"ContainerStarted","Data":"d013ac858bd01f79cf9e34a4e4f968a81f1d3e43f533b329f176b509fb2ca5b8"} Jan 20 09:13:57 crc kubenswrapper[5115]: I0120 09:13:57.585333 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" podStartSLOduration=2.585314701 podStartE2EDuration="2.585314701s" podCreationTimestamp="2026-01-20 09:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:13:57.584292925 +0000 UTC m=+347.753071455" watchObservedRunningTime="2026-01-20 09:13:57.585314701 +0000 UTC m=+347.754093231" Jan 20 09:13:57 crc kubenswrapper[5115]: I0120 09:13:57.936310 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64f6849bcb-vvczq" Jan 20 09:13:58 crc kubenswrapper[5115]: I0120 09:13:58.225684 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16e383ce-519b-41ba-8dda-d0d71e14316e" path="/var/lib/kubelet/pods/16e383ce-519b-41ba-8dda-d0d71e14316e/volumes" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.114321 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mrnvw"] Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.115820 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mrnvw" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="registry-server" containerID="cri-o://12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4" gracePeriod=30 Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.120467 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dlnj"] Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.120756 5115 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2dlnj" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="registry-server" containerID="cri-o://c06f862960c9bdfaf0ac5b708c347681a6defb95c62d2ffbb57bb0f49aff19dc" gracePeriod=30 Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.126947 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"] Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.127245 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator" containerID="cri-o://fbcae2a717246018256a95dc1f3b2f061bf042569074a110a6a284fcd803f2bb" gracePeriod=30 Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.140454 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5plkc"] Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.141121 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5plkc" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="registry-server" containerID="cri-o://094fa074aa44e27d111ea636cfa5e177561853a33b91fef37dd4590007b099fc" gracePeriod=30 Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.153994 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"] Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.161094 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-45pv6"] Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.161531 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-45pv6" 
podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="registry-server" containerID="cri-o://3b2695392662c24c56f1422eadae97e754a2f16833a327817bd2b7835887f6bf" gracePeriod=30 Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.161245 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.174142 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"] Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.331797 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqjwp\" (UniqueName: \"kubernetes.io/projected/b75152d4-1e91-4c11-8979-87d8e0ef68a5-kube-api-access-wqjwp\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.332237 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b75152d4-1e91-4c11-8979-87d8e0ef68a5-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.332286 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b75152d4-1e91-4c11-8979-87d8e0ef68a5-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 
09:14:09.332320 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b75152d4-1e91-4c11-8979-87d8e0ef68a5-tmp\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.433694 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wqjwp\" (UniqueName: \"kubernetes.io/projected/b75152d4-1e91-4c11-8979-87d8e0ef68a5-kube-api-access-wqjwp\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.433750 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b75152d4-1e91-4c11-8979-87d8e0ef68a5-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.433800 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b75152d4-1e91-4c11-8979-87d8e0ef68a5-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.433819 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b75152d4-1e91-4c11-8979-87d8e0ef68a5-tmp\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: 
\"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.434505 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b75152d4-1e91-4c11-8979-87d8e0ef68a5-tmp\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.435127 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b75152d4-1e91-4c11-8979-87d8e0ef68a5-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.439765 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b75152d4-1e91-4c11-8979-87d8e0ef68a5-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.452033 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqjwp\" (UniqueName: \"kubernetes.io/projected/b75152d4-1e91-4c11-8979-87d8e0ef68a5-kube-api-access-wqjwp\") pod \"marketplace-operator-547dbd544d-dh9gg\" (UID: \"b75152d4-1e91-4c11-8979-87d8e0ef68a5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.479093 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.517139 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.559423 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25426\" (UniqueName: \"kubernetes.io/projected/e388c4ad-0d02-4736-b503-a96f7478edb4-kube-api-access-25426\") pod \"e388c4ad-0d02-4736-b503-a96f7478edb4\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.559595 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-utilities\") pod \"e388c4ad-0d02-4736-b503-a96f7478edb4\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.559734 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-catalog-content\") pod \"e388c4ad-0d02-4736-b503-a96f7478edb4\" (UID: \"e388c4ad-0d02-4736-b503-a96f7478edb4\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.564813 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e388c4ad-0d02-4736-b503-a96f7478edb4-kube-api-access-25426" (OuterVolumeSpecName: "kube-api-access-25426") pod "e388c4ad-0d02-4736-b503-a96f7478edb4" (UID: "e388c4ad-0d02-4736-b503-a96f7478edb4"). InnerVolumeSpecName "kube-api-access-25426". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.565692 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-utilities" (OuterVolumeSpecName: "utilities") pod "e388c4ad-0d02-4736-b503-a96f7478edb4" (UID: "e388c4ad-0d02-4736-b503-a96f7478edb4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.605091 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e388c4ad-0d02-4736-b503-a96f7478edb4" (UID: "e388c4ad-0d02-4736-b503-a96f7478edb4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.660842 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-25426\" (UniqueName: \"kubernetes.io/projected/e388c4ad-0d02-4736-b503-a96f7478edb4-kube-api-access-25426\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.660881 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.660908 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e388c4ad-0d02-4736-b503-a96f7478edb4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.671384 5115 generic.go:358] "Generic (PLEG): container finished" podID="57355d9d-a14f-4cf0-8a63-842b27765063" containerID="3b2695392662c24c56f1422eadae97e754a2f16833a327817bd2b7835887f6bf" exitCode=0 Jan 
20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.671477 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerDied","Data":"3b2695392662c24c56f1422eadae97e754a2f16833a327817bd2b7835887f6bf"} Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.673837 5115 generic.go:358] "Generic (PLEG): container finished" podID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerID="fbcae2a717246018256a95dc1f3b2f061bf042569074a110a6a284fcd803f2bb" exitCode=0 Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.673915 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" event={"ID":"3984fc5a-413e-46e1-94ab-3c230891fe87","Type":"ContainerDied","Data":"fbcae2a717246018256a95dc1f3b2f061bf042569074a110a6a284fcd803f2bb"} Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.673936 5115 scope.go:117] "RemoveContainer" containerID="875b2918867b6e3f78a8dae2adc4f181e4875284a8cd56fc5c6d213e75261ea2" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.681305 5115 generic.go:358] "Generic (PLEG): container finished" podID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerID="c06f862960c9bdfaf0ac5b708c347681a6defb95c62d2ffbb57bb0f49aff19dc" exitCode=0 Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.681764 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerDied","Data":"c06f862960c9bdfaf0ac5b708c347681a6defb95c62d2ffbb57bb0f49aff19dc"} Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.686169 5115 generic.go:358] "Generic (PLEG): container finished" podID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerID="12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4" exitCode=0 Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.686391 5115 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrnvw" event={"ID":"e388c4ad-0d02-4736-b503-a96f7478edb4","Type":"ContainerDied","Data":"12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4"} Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.686440 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrnvw" event={"ID":"e388c4ad-0d02-4736-b503-a96f7478edb4","Type":"ContainerDied","Data":"ba3c29f3ff3951d423c587bfc54fde3036fb68c70ae8bcabcb0199b3d1a764a2"} Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.686541 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mrnvw" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.688714 5115 generic.go:358] "Generic (PLEG): container finished" podID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerID="094fa074aa44e27d111ea636cfa5e177561853a33b91fef37dd4590007b099fc" exitCode=0 Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.688935 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5plkc" event={"ID":"f9d4e242-d348-4f3f-8453-612b19e41f3a","Type":"ContainerDied","Data":"094fa074aa44e27d111ea636cfa5e177561853a33b91fef37dd4590007b099fc"} Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.689751 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2dlnj" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.717005 5115 scope.go:117] "RemoveContainer" containerID="12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.729836 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mrnvw"] Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.733677 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mrnvw"] Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.743225 5115 scope.go:117] "RemoveContainer" containerID="9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.767456 5115 scope.go:117] "RemoveContainer" containerID="641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.786079 5115 scope.go:117] "RemoveContainer" containerID="12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4" Jan 20 09:14:09 crc kubenswrapper[5115]: E0120 09:14:09.786498 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4\": container with ID starting with 12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4 not found: ID does not exist" containerID="12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.786530 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4"} err="failed to get container status \"12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4\": rpc error: code = NotFound desc = could not find 
container \"12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4\": container with ID starting with 12bacbbfbfe9faaa1e7cb579c3b31cef9d5d216f92866ee82cd59e4a269034a4 not found: ID does not exist" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.786556 5115 scope.go:117] "RemoveContainer" containerID="9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4" Jan 20 09:14:09 crc kubenswrapper[5115]: E0120 09:14:09.786946 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4\": container with ID starting with 9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4 not found: ID does not exist" containerID="9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.786977 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4"} err="failed to get container status \"9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4\": rpc error: code = NotFound desc = could not find container \"9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4\": container with ID starting with 9c95486a14862e504cd21f4a5c67708671af72d9da1f61dfdf84b84b34aa1ed4 not found: ID does not exist" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.786994 5115 scope.go:117] "RemoveContainer" containerID="641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77" Jan 20 09:14:09 crc kubenswrapper[5115]: E0120 09:14:09.787217 5115 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77\": container with ID starting with 641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77 not found: ID does 
not exist" containerID="641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.787242 5115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77"} err="failed to get container status \"641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77\": rpc error: code = NotFound desc = could not find container \"641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77\": container with ID starting with 641a2305dbc76735572c7584f2d8452f84f02582dbd2624bbe12d1f145836a77 not found: ID does not exist" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.802445 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5plkc" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.813851 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.826714 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-45pv6" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.862601 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-catalog-content\") pod \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.862883 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-catalog-content\") pod \"57355d9d-a14f-4cf0-8a63-842b27765063\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863048 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-catalog-content\") pod \"f9d4e242-d348-4f3f-8453-612b19e41f3a\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863168 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6cc7\" (UniqueName: \"kubernetes.io/projected/f9d4e242-d348-4f3f-8453-612b19e41f3a-kube-api-access-x6cc7\") pod \"f9d4e242-d348-4f3f-8453-612b19e41f3a\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863294 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6hvv\" (UniqueName: \"kubernetes.io/projected/3984fc5a-413e-46e1-94ab-3c230891fe87-kube-api-access-l6hvv\") pod \"3984fc5a-413e-46e1-94ab-3c230891fe87\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863482 5115 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3984fc5a-413e-46e1-94ab-3c230891fe87-tmp\") pod \"3984fc5a-413e-46e1-94ab-3c230891fe87\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863620 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-utilities\") pod \"f9d4e242-d348-4f3f-8453-612b19e41f3a\" (UID: \"f9d4e242-d348-4f3f-8453-612b19e41f3a\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863741 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-trusted-ca\") pod \"3984fc5a-413e-46e1-94ab-3c230891fe87\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.863862 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-operator-metrics\") pod \"3984fc5a-413e-46e1-94ab-3c230891fe87\" (UID: \"3984fc5a-413e-46e1-94ab-3c230891fe87\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.864031 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwzdv\" (UniqueName: \"kubernetes.io/projected/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-kube-api-access-xwzdv\") pod \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\" (UID: \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.864159 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-utilities\") pod \"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\" (UID: 
\"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.864313 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4msjm\" (UniqueName: \"kubernetes.io/projected/57355d9d-a14f-4cf0-8a63-842b27765063-kube-api-access-4msjm\") pod \"57355d9d-a14f-4cf0-8a63-842b27765063\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.864614 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-utilities\") pod \"57355d9d-a14f-4cf0-8a63-842b27765063\" (UID: \"57355d9d-a14f-4cf0-8a63-842b27765063\") " Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.867446 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3984fc5a-413e-46e1-94ab-3c230891fe87" (UID: "3984fc5a-413e-46e1-94ab-3c230891fe87"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.868036 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3984fc5a-413e-46e1-94ab-3c230891fe87-tmp" (OuterVolumeSpecName: "tmp") pod "3984fc5a-413e-46e1-94ab-3c230891fe87" (UID: "3984fc5a-413e-46e1-94ab-3c230891fe87"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.868074 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-utilities" (OuterVolumeSpecName: "utilities") pod "1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" (UID: "1d51d284-ea4b-4e3f-95bd-de28c5df1f3c"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.868257 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-utilities" (OuterVolumeSpecName: "utilities") pod "f9d4e242-d348-4f3f-8453-612b19e41f3a" (UID: "f9d4e242-d348-4f3f-8453-612b19e41f3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.868600 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-utilities" (OuterVolumeSpecName: "utilities") pod "57355d9d-a14f-4cf0-8a63-842b27765063" (UID: "57355d9d-a14f-4cf0-8a63-842b27765063"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.870176 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57355d9d-a14f-4cf0-8a63-842b27765063-kube-api-access-4msjm" (OuterVolumeSpecName: "kube-api-access-4msjm") pod "57355d9d-a14f-4cf0-8a63-842b27765063" (UID: "57355d9d-a14f-4cf0-8a63-842b27765063"). InnerVolumeSpecName "kube-api-access-4msjm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.883453 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d4e242-d348-4f3f-8453-612b19e41f3a-kube-api-access-x6cc7" (OuterVolumeSpecName: "kube-api-access-x6cc7") pod "f9d4e242-d348-4f3f-8453-612b19e41f3a" (UID: "f9d4e242-d348-4f3f-8453-612b19e41f3a"). InnerVolumeSpecName "kube-api-access-x6cc7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.883468 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-kube-api-access-xwzdv" (OuterVolumeSpecName: "kube-api-access-xwzdv") pod "1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" (UID: "1d51d284-ea4b-4e3f-95bd-de28c5df1f3c"). InnerVolumeSpecName "kube-api-access-xwzdv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.883681 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3984fc5a-413e-46e1-94ab-3c230891fe87" (UID: "3984fc5a-413e-46e1-94ab-3c230891fe87"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.887432 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9d4e242-d348-4f3f-8453-612b19e41f3a" (UID: "f9d4e242-d348-4f3f-8453-612b19e41f3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.888400 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3984fc5a-413e-46e1-94ab-3c230891fe87-kube-api-access-l6hvv" (OuterVolumeSpecName: "kube-api-access-l6hvv") pod "3984fc5a-413e-46e1-94ab-3c230891fe87" (UID: "3984fc5a-413e-46e1-94ab-3c230891fe87"). InnerVolumeSpecName "kube-api-access-l6hvv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.919600 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" (UID: "1d51d284-ea4b-4e3f-95bd-de28c5df1f3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965675 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4msjm\" (UniqueName: \"kubernetes.io/projected/57355d9d-a14f-4cf0-8a63-842b27765063-kube-api-access-4msjm\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965700 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965708 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965716 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965724 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x6cc7\" (UniqueName: \"kubernetes.io/projected/f9d4e242-d348-4f3f-8453-612b19e41f3a-kube-api-access-x6cc7\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965732 5115 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-l6hvv\" (UniqueName: \"kubernetes.io/projected/3984fc5a-413e-46e1-94ab-3c230891fe87-kube-api-access-l6hvv\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965741 5115 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3984fc5a-413e-46e1-94ab-3c230891fe87-tmp\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965749 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9d4e242-d348-4f3f-8453-612b19e41f3a-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965756 5115 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965764 5115 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3984fc5a-413e-46e1-94ab-3c230891fe87-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965773 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xwzdv\" (UniqueName: \"kubernetes.io/projected/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-kube-api-access-xwzdv\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.965781 5115 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 09:14:09 crc kubenswrapper[5115]: I0120 09:14:09.973355 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "57355d9d-a14f-4cf0-8a63-842b27765063" (UID: "57355d9d-a14f-4cf0-8a63-842b27765063"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.025776 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"]
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.069122 5115 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57355d9d-a14f-4cf0-8a63-842b27765063-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.226123 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" path="/var/lib/kubelet/pods/e388c4ad-0d02-4736-b503-a96f7478edb4/volumes"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.695288 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-45pv6" event={"ID":"57355d9d-a14f-4cf0-8a63-842b27765063","Type":"ContainerDied","Data":"2a29832ffd9412a21621468b6591cb9a7196b1735133523a4d5919937f22f017"}
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.695679 5115 scope.go:117] "RemoveContainer" containerID="3b2695392662c24c56f1422eadae97e754a2f16833a327817bd2b7835887f6bf"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.695343 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-45pv6"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.697792 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh" event={"ID":"3984fc5a-413e-46e1-94ab-3c230891fe87","Type":"ContainerDied","Data":"ba9e935cd9dbcccba3373b56114fb5112e6bd4ddbcf850c03f77ef25fb786214"}
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.697847 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.700055 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dlnj"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.700070 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dlnj" event={"ID":"1d51d284-ea4b-4e3f-95bd-de28c5df1f3c","Type":"ContainerDied","Data":"b623557fb8fa89838a7fffcb0c7e471eeaf77057e10e543a3504832324b27404"}
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.703065 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" event={"ID":"b75152d4-1e91-4c11-8979-87d8e0ef68a5","Type":"ContainerStarted","Data":"f91f012d8d51da192c2cb70d076583a067b4976c8cd68a303b4e31a65ccfbe92"}
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.703108 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" event={"ID":"b75152d4-1e91-4c11-8979-87d8e0ef68a5","Type":"ContainerStarted","Data":"67a4a7f9483a6190b221913a94005d524b343bc29b1ea84d548cf0fd3b574ebf"}
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.703687 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.708726 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5plkc" event={"ID":"f9d4e242-d348-4f3f-8453-612b19e41f3a","Type":"ContainerDied","Data":"50d3c0e76b095c21c4ac1a5beba7290e74c3ffa7941936c22e8017974e850944"}
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.708880 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5plkc"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.711531 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.713954 5115 scope.go:117] "RemoveContainer" containerID="1c7349b861fcc3cdec3f5eaa960ebb43329afec1ce06d636fabc17f9cb7e20c8"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.723505 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-dh9gg" podStartSLOduration=1.723489861 podStartE2EDuration="1.723489861s" podCreationTimestamp="2026-01-20 09:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:14:10.721857207 +0000 UTC m=+360.890635737" watchObservedRunningTime="2026-01-20 09:14:10.723489861 +0000 UTC m=+360.892268391"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.736342 5115 scope.go:117] "RemoveContainer" containerID="09806ac667b8436fffdd10a05c009eff6bb4282dd93406b629566c95167bc9ea"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.743267 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"]
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.748918 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9gfdh"]
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.753849 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dlnj"]
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.759340 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2dlnj"]
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.762984 5115 scope.go:117] "RemoveContainer" containerID="fbcae2a717246018256a95dc1f3b2f061bf042569074a110a6a284fcd803f2bb"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.779120 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-45pv6"]
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.784659 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-45pv6"]
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.790614 5115 scope.go:117] "RemoveContainer" containerID="c06f862960c9bdfaf0ac5b708c347681a6defb95c62d2ffbb57bb0f49aff19dc"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.812944 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5plkc"]
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.814321 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5plkc"]
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.830838 5115 scope.go:117] "RemoveContainer" containerID="a33dfb9140b05712014768cf8b01acc9283196096d0f87e1b764f33c91c5086f"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.868970 5115 scope.go:117] "RemoveContainer" containerID="06668f7c92efbf93f8c0b42e46d251a0aadb5b80b4c08ce779cc27955ee5a124"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.889977 5115 scope.go:117] "RemoveContainer" containerID="094fa074aa44e27d111ea636cfa5e177561853a33b91fef37dd4590007b099fc"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.911095 5115 scope.go:117] "RemoveContainer" containerID="74b5178a1b534ac941dea2392034f3b3ec2731f44ad8c1e9849d9151b8564a9d"
Jan 20 09:14:10 crc kubenswrapper[5115]: I0120 09:14:10.925655 5115 scope.go:117] "RemoveContainer" containerID="292ea7ef1a462b0b3647f2424736d354073f39a37c563e3f2ffad608521d16f7"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.731313 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fz98h"]
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732482 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="registry-server"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732520 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="registry-server"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732541 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="extract-utilities"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732554 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="extract-utilities"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732571 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="extract-utilities"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732583 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="extract-utilities"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732612 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="registry-server"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732624 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="registry-server"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732639 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="extract-content"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732650 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="extract-content"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732670 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="extract-content"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732681 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="extract-content"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732727 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="registry-server"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732741 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="registry-server"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732757 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732769 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732788 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="extract-utilities"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732800 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="extract-utilities"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732815 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732826 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732841 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="registry-server"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732852 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="registry-server"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732889 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="extract-utilities"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732930 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="extract-utilities"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732945 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="extract-content"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732956 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="extract-content"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732969 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="extract-content"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.732980 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="extract-content"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.733141 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.733173 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" containerName="registry-server"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.733191 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" containerName="marketplace-operator"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.733216 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="e388c4ad-0d02-4736-b503-a96f7478edb4" containerName="registry-server"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.733234 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" containerName="registry-server"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.733257 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" containerName="registry-server"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.773239 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fz98h"]
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.773379 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.783253 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.792173 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad987c3-e453-432f-8c54-3c7a336446f9-utilities\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.792295 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksj9j\" (UniqueName: \"kubernetes.io/projected/aad987c3-e453-432f-8c54-3c7a336446f9-kube-api-access-ksj9j\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.792441 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad987c3-e453-432f-8c54-3c7a336446f9-catalog-content\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.893734 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad987c3-e453-432f-8c54-3c7a336446f9-catalog-content\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.893885 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad987c3-e453-432f-8c54-3c7a336446f9-utilities\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.893975 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ksj9j\" (UniqueName: \"kubernetes.io/projected/aad987c3-e453-432f-8c54-3c7a336446f9-kube-api-access-ksj9j\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.894620 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aad987c3-e453-432f-8c54-3c7a336446f9-utilities\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.895110 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aad987c3-e453-432f-8c54-3c7a336446f9-catalog-content\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:11 crc kubenswrapper[5115]: I0120 09:14:11.920973 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksj9j\" (UniqueName: \"kubernetes.io/projected/aad987c3-e453-432f-8c54-3c7a336446f9-kube-api-access-ksj9j\") pod \"redhat-operators-fz98h\" (UID: \"aad987c3-e453-432f-8c54-3c7a336446f9\") " pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.104128 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fz98h"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.233869 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d51d284-ea4b-4e3f-95bd-de28c5df1f3c" path="/var/lib/kubelet/pods/1d51d284-ea4b-4e3f-95bd-de28c5df1f3c/volumes"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.235142 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3984fc5a-413e-46e1-94ab-3c230891fe87" path="/var/lib/kubelet/pods/3984fc5a-413e-46e1-94ab-3c230891fe87/volumes"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.235968 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57355d9d-a14f-4cf0-8a63-842b27765063" path="/var/lib/kubelet/pods/57355d9d-a14f-4cf0-8a63-842b27765063/volumes"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.237498 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9d4e242-d348-4f3f-8453-612b19e41f3a" path="/var/lib/kubelet/pods/f9d4e242-d348-4f3f-8453-612b19e41f3a/volumes"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.570636 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fz98h"]
Jan 20 09:14:12 crc kubenswrapper[5115]: W0120 09:14:12.579441 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaad987c3_e453_432f_8c54_3c7a336446f9.slice/crio-af6da638e09de5359de1f528b19de846e2618df5088fe16aa3907b3b0399afc7 WatchSource:0}: Error finding container af6da638e09de5359de1f528b19de846e2618df5088fe16aa3907b3b0399afc7: Status 404 returned error can't find the container with id af6da638e09de5359de1f528b19de846e2618df5088fe16aa3907b3b0399afc7
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.725155 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9ckvv"]
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.730140 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.735972 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ckvv"]
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.738621 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.750908 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz98h" event={"ID":"aad987c3-e453-432f-8c54-3c7a336446f9","Type":"ContainerStarted","Data":"af6da638e09de5359de1f528b19de846e2618df5088fe16aa3907b3b0399afc7"}
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.803340 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-utilities\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.803622 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhkm9\" (UniqueName: \"kubernetes.io/projected/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-kube-api-access-jhkm9\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.803718 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-catalog-content\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.905349 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-catalog-content\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.905709 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-utilities\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.905913 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jhkm9\" (UniqueName: \"kubernetes.io/projected/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-kube-api-access-jhkm9\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.905992 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-catalog-content\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.906342 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-utilities\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:12 crc kubenswrapper[5115]: I0120 09:14:12.929407 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhkm9\" (UniqueName: \"kubernetes.io/projected/9a5b59fd-dfe1-4370-8768-28c4a001c9e3-kube-api-access-jhkm9\") pod \"community-operators-9ckvv\" (UID: \"9a5b59fd-dfe1-4370-8768-28c4a001c9e3\") " pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:13 crc kubenswrapper[5115]: I0120 09:14:13.118962 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ckvv"
Jan 20 09:14:13 crc kubenswrapper[5115]: I0120 09:14:13.532713 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ckvv"]
Jan 20 09:14:13 crc kubenswrapper[5115]: I0120 09:14:13.769047 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckvv" event={"ID":"9a5b59fd-dfe1-4370-8768-28c4a001c9e3","Type":"ContainerStarted","Data":"5efa34340cba2c121798cf78c6f46b08114ceff4e45cd2c65994e420cad7dc49"}
Jan 20 09:14:13 crc kubenswrapper[5115]: I0120 09:14:13.769111 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckvv" event={"ID":"9a5b59fd-dfe1-4370-8768-28c4a001c9e3","Type":"ContainerStarted","Data":"bda71f569251559a5dda6b1fd60e8b4e4deca6ef39824928c7f9a95fcce2a666"}
Jan 20 09:14:13 crc kubenswrapper[5115]: I0120 09:14:13.773461 5115 generic.go:358] "Generic (PLEG): container finished" podID="aad987c3-e453-432f-8c54-3c7a336446f9" containerID="5af5cfd237071b03c8f1cb8f38c284b6d8474e8eadda0d6f831afb21a4c3a022" exitCode=0
Jan 20 09:14:13 crc kubenswrapper[5115]: I0120 09:14:13.773550 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz98h" event={"ID":"aad987c3-e453-432f-8c54-3c7a336446f9","Type":"ContainerDied","Data":"5af5cfd237071b03c8f1cb8f38c284b6d8474e8eadda0d6f831afb21a4c3a022"}
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.084153 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"]
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.090498 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.093306 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"]
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.141361 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wbbcl"]
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.146062 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wbbcl"]
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.146209 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.149022 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.238789 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.238829 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68b6fa77-d9ae-4530-8ee7-9c67130972e0-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.238852 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-bound-sa-token\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.238885 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stmtg\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-kube-api-access-stmtg\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.238952 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68b6fa77-d9ae-4530-8ee7-9c67130972e0-trusted-ca\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.238981 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-registry-tls\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.239003 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68b6fa77-d9ae-4530-8ee7-9c67130972e0-registry-certificates\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.239071 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68b6fa77-d9ae-4530-8ee7-9c67130972e0-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.271652 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.340673 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-registry-tls\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.340758 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68b6fa77-d9ae-4530-8ee7-9c67130972e0-registry-certificates\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.340997 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68b6fa77-d9ae-4530-8ee7-9c67130972e0-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341147 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6s2r\" (UniqueName: \"kubernetes.io/projected/cdf226cf-7ac3-4329-a01c-54a92f0189f8-kube-api-access-q6s2r\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341195 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdf226cf-7ac3-4329-a01c-54a92f0189f8-utilities\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341235 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdf226cf-7ac3-4329-a01c-54a92f0189f8-catalog-content\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341296 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68b6fa77-d9ae-4530-8ee7-9c67130972e0-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341335 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-bound-sa-token\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341475 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-stmtg\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-kube-api-access-stmtg\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.341544 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68b6fa77-d9ae-4530-8ee7-9c67130972e0-trusted-ca\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.342616 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68b6fa77-d9ae-4530-8ee7-9c67130972e0-registry-certificates\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.342879 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68b6fa77-d9ae-4530-8ee7-9c67130972e0-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.345842 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68b6fa77-d9ae-4530-8ee7-9c67130972e0-trusted-ca\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.351015 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-registry-tls\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.352369 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68b6fa77-d9ae-4530-8ee7-9c67130972e0-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.363723 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-stmtg\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-kube-api-access-stmtg\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.365987 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b6fa77-d9ae-4530-8ee7-9c67130972e0-bound-sa-token\") pod \"image-registry-5d9d95bf5b-wg9m7\" (UID: \"68b6fa77-d9ae-4530-8ee7-9c67130972e0\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.443575 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q6s2r\" (UniqueName: \"kubernetes.io/projected/cdf226cf-7ac3-4329-a01c-54a92f0189f8-kube-api-access-q6s2r\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl"
Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.444199 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName:
\"kubernetes.io/empty-dir/cdf226cf-7ac3-4329-a01c-54a92f0189f8-utilities\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.444381 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdf226cf-7ac3-4329-a01c-54a92f0189f8-catalog-content\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.444754 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdf226cf-7ac3-4329-a01c-54a92f0189f8-utilities\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.444810 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdf226cf-7ac3-4329-a01c-54a92f0189f8-catalog-content\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.466250 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6s2r\" (UniqueName: \"kubernetes.io/projected/cdf226cf-7ac3-4329-a01c-54a92f0189f8-kube-api-access-q6s2r\") pod \"certified-operators-wbbcl\" (UID: \"cdf226cf-7ac3-4329-a01c-54a92f0189f8\") " pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.469065 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7" Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.477569 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.787758 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz98h" event={"ID":"aad987c3-e453-432f-8c54-3c7a336446f9","Type":"ContainerStarted","Data":"bb0a260d90a078ed905468f6aea6e5b913c206257bfaabaacf96b2aa5f7abc05"} Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.789968 5115 generic.go:358] "Generic (PLEG): container finished" podID="9a5b59fd-dfe1-4370-8768-28c4a001c9e3" containerID="5efa34340cba2c121798cf78c6f46b08114ceff4e45cd2c65994e420cad7dc49" exitCode=0 Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.790029 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckvv" event={"ID":"9a5b59fd-dfe1-4370-8768-28c4a001c9e3","Type":"ContainerDied","Data":"5efa34340cba2c121798cf78c6f46b08114ceff4e45cd2c65994e420cad7dc49"} Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.882784 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-wg9m7"] Jan 20 09:14:14 crc kubenswrapper[5115]: W0120 09:14:14.892768 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68b6fa77_d9ae_4530_8ee7_9c67130972e0.slice/crio-f6326eba7ad5b435563510f8e90a89487e14ca172b6a8e7f824ea835cc7325e7 WatchSource:0}: Error finding container f6326eba7ad5b435563510f8e90a89487e14ca172b6a8e7f824ea835cc7325e7: Status 404 returned error can't find the container with id f6326eba7ad5b435563510f8e90a89487e14ca172b6a8e7f824ea835cc7325e7 Jan 20 09:14:14 crc kubenswrapper[5115]: I0120 09:14:14.978304 5115 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wbbcl"] Jan 20 09:14:15 crc kubenswrapper[5115]: W0120 09:14:15.001758 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdf226cf_7ac3_4329_a01c_54a92f0189f8.slice/crio-7056cdf8a934a4a064c2fff93131a23a634074032ebcc14e65a3ae8b7d9efee0 WatchSource:0}: Error finding container 7056cdf8a934a4a064c2fff93131a23a634074032ebcc14e65a3ae8b7d9efee0: Status 404 returned error can't find the container with id 7056cdf8a934a4a064c2fff93131a23a634074032ebcc14e65a3ae8b7d9efee0 Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.525110 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vl5h2"] Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.529844 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.533702 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.535780 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl5h2"] Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.663139 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea34b88-772f-448a-ba98-33a5deda3740-utilities\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.663441 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3ea34b88-772f-448a-ba98-33a5deda3740-catalog-content\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.663490 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vjdx\" (UniqueName: \"kubernetes.io/projected/3ea34b88-772f-448a-ba98-33a5deda3740-kube-api-access-7vjdx\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.764603 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea34b88-772f-448a-ba98-33a5deda3740-catalog-content\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.764646 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjdx\" (UniqueName: \"kubernetes.io/projected/3ea34b88-772f-448a-ba98-33a5deda3740-kube-api-access-7vjdx\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.764730 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea34b88-772f-448a-ba98-33a5deda3740-utilities\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.765141 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3ea34b88-772f-448a-ba98-33a5deda3740-utilities\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.765169 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea34b88-772f-448a-ba98-33a5deda3740-catalog-content\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.796255 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vjdx\" (UniqueName: \"kubernetes.io/projected/3ea34b88-772f-448a-ba98-33a5deda3740-kube-api-access-7vjdx\") pod \"redhat-marketplace-vl5h2\" (UID: \"3ea34b88-772f-448a-ba98-33a5deda3740\") " pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.797420 5115 generic.go:358] "Generic (PLEG): container finished" podID="aad987c3-e453-432f-8c54-3c7a336446f9" containerID="bb0a260d90a078ed905468f6aea6e5b913c206257bfaabaacf96b2aa5f7abc05" exitCode=0 Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.797518 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz98h" event={"ID":"aad987c3-e453-432f-8c54-3c7a336446f9","Type":"ContainerDied","Data":"bb0a260d90a078ed905468f6aea6e5b913c206257bfaabaacf96b2aa5f7abc05"} Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.799427 5115 generic.go:358] "Generic (PLEG): container finished" podID="cdf226cf-7ac3-4329-a01c-54a92f0189f8" containerID="af44ae2deaab214d2fa993da72d2f6a6652798315b1ab10ffc25a3206614468d" exitCode=0 Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.799600 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbbcl" 
event={"ID":"cdf226cf-7ac3-4329-a01c-54a92f0189f8","Type":"ContainerDied","Data":"af44ae2deaab214d2fa993da72d2f6a6652798315b1ab10ffc25a3206614468d"} Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.799686 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbbcl" event={"ID":"cdf226cf-7ac3-4329-a01c-54a92f0189f8","Type":"ContainerStarted","Data":"7056cdf8a934a4a064c2fff93131a23a634074032ebcc14e65a3ae8b7d9efee0"} Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.803949 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckvv" event={"ID":"9a5b59fd-dfe1-4370-8768-28c4a001c9e3","Type":"ContainerStarted","Data":"863a773a9f1ae2a215a65e4189697107176edf4f24a2efb98e697fb757149aae"} Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.805565 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7" event={"ID":"68b6fa77-d9ae-4530-8ee7-9c67130972e0","Type":"ContainerStarted","Data":"5a458c0a79818bd65bde3fefe7db8a798ff478182650ae0a031ef73983042e68"} Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.805601 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7" event={"ID":"68b6fa77-d9ae-4530-8ee7-9c67130972e0","Type":"ContainerStarted","Data":"f6326eba7ad5b435563510f8e90a89487e14ca172b6a8e7f824ea835cc7325e7"} Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.811924 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7" Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.862554 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7" podStartSLOduration=1.862436271 podStartE2EDuration="1.862436271s" podCreationTimestamp="2026-01-20 09:14:14 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:14:15.862292708 +0000 UTC m=+366.031071238" watchObservedRunningTime="2026-01-20 09:14:15.862436271 +0000 UTC m=+366.031214841" Jan 20 09:14:15 crc kubenswrapper[5115]: I0120 09:14:15.863615 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.279335 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl5h2"] Jan 20 09:14:16 crc kubenswrapper[5115]: W0120 09:14:16.296811 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ea34b88_772f_448a_ba98_33a5deda3740.slice/crio-40872df91a96dca0f60abb2cc208add9d0fb45faa884c89a2cf5376512c4c900 WatchSource:0}: Error finding container 40872df91a96dca0f60abb2cc208add9d0fb45faa884c89a2cf5376512c4c900: Status 404 returned error can't find the container with id 40872df91a96dca0f60abb2cc208add9d0fb45faa884c89a2cf5376512c4c900 Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.815721 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz98h" event={"ID":"aad987c3-e453-432f-8c54-3c7a336446f9","Type":"ContainerStarted","Data":"32439bae146c329d1d7d55b6eb0230034190d0ac960506954672ab7530573ab6"} Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.820121 5115 generic.go:358] "Generic (PLEG): container finished" podID="9a5b59fd-dfe1-4370-8768-28c4a001c9e3" containerID="863a773a9f1ae2a215a65e4189697107176edf4f24a2efb98e697fb757149aae" exitCode=0 Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.820215 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckvv" 
event={"ID":"9a5b59fd-dfe1-4370-8768-28c4a001c9e3","Type":"ContainerDied","Data":"863a773a9f1ae2a215a65e4189697107176edf4f24a2efb98e697fb757149aae"} Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.824756 5115 generic.go:358] "Generic (PLEG): container finished" podID="3ea34b88-772f-448a-ba98-33a5deda3740" containerID="4ce0d5d6ba15819cf5c63b70361421a9ac213971329750f111f55c3a49b6e8f7" exitCode=0 Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.824812 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl5h2" event={"ID":"3ea34b88-772f-448a-ba98-33a5deda3740","Type":"ContainerDied","Data":"4ce0d5d6ba15819cf5c63b70361421a9ac213971329750f111f55c3a49b6e8f7"} Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.825029 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl5h2" event={"ID":"3ea34b88-772f-448a-ba98-33a5deda3740","Type":"ContainerStarted","Data":"40872df91a96dca0f60abb2cc208add9d0fb45faa884c89a2cf5376512c4c900"} Jan 20 09:14:16 crc kubenswrapper[5115]: I0120 09:14:16.832102 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fz98h" podStartSLOduration=5.056787576 podStartE2EDuration="5.832085716s" podCreationTimestamp="2026-01-20 09:14:11 +0000 UTC" firstStartedPulling="2026-01-20 09:14:13.775109681 +0000 UTC m=+363.943888211" lastFinishedPulling="2026-01-20 09:14:14.550407821 +0000 UTC m=+364.719186351" observedRunningTime="2026-01-20 09:14:16.83001666 +0000 UTC m=+366.998795190" watchObservedRunningTime="2026-01-20 09:14:16.832085716 +0000 UTC m=+367.000864246" Jan 20 09:14:17 crc kubenswrapper[5115]: I0120 09:14:17.842119 5115 generic.go:358] "Generic (PLEG): container finished" podID="cdf226cf-7ac3-4329-a01c-54a92f0189f8" containerID="ae6a2033d223f73e842bc54ca33772fbb34c935004ebe6d3c3590cb8b32d00b8" exitCode=0 Jan 20 09:14:17 crc kubenswrapper[5115]: I0120 09:14:17.842236 5115 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbbcl" event={"ID":"cdf226cf-7ac3-4329-a01c-54a92f0189f8","Type":"ContainerDied","Data":"ae6a2033d223f73e842bc54ca33772fbb34c935004ebe6d3c3590cb8b32d00b8"} Jan 20 09:14:17 crc kubenswrapper[5115]: I0120 09:14:17.845970 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ckvv" event={"ID":"9a5b59fd-dfe1-4370-8768-28c4a001c9e3","Type":"ContainerStarted","Data":"39d1c11030ba364157213f20de2ba3fe0ea5ee65763dfb9f969e7f1e088bf790"} Jan 20 09:14:17 crc kubenswrapper[5115]: I0120 09:14:17.875991 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9ckvv" podStartSLOduration=5.176124837 podStartE2EDuration="5.875969567s" podCreationTimestamp="2026-01-20 09:14:12 +0000 UTC" firstStartedPulling="2026-01-20 09:14:14.790906373 +0000 UTC m=+364.959684903" lastFinishedPulling="2026-01-20 09:14:15.490751093 +0000 UTC m=+365.659529633" observedRunningTime="2026-01-20 09:14:17.872974216 +0000 UTC m=+368.041752776" watchObservedRunningTime="2026-01-20 09:14:17.875969567 +0000 UTC m=+368.044748107" Jan 20 09:14:18 crc kubenswrapper[5115]: I0120 09:14:18.853946 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wbbcl" event={"ID":"cdf226cf-7ac3-4329-a01c-54a92f0189f8","Type":"ContainerStarted","Data":"871d5162bbab49621e927f92365411bb769bd2349ca0295ff75fd24aad381f56"} Jan 20 09:14:18 crc kubenswrapper[5115]: I0120 09:14:18.856115 5115 generic.go:358] "Generic (PLEG): container finished" podID="3ea34b88-772f-448a-ba98-33a5deda3740" containerID="bc0c9d83ba57010823d073ae4e064475414173b2e3a7dec40dc8810f5a7485f8" exitCode=0 Jan 20 09:14:18 crc kubenswrapper[5115]: I0120 09:14:18.856242 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl5h2" 
event={"ID":"3ea34b88-772f-448a-ba98-33a5deda3740","Type":"ContainerDied","Data":"bc0c9d83ba57010823d073ae4e064475414173b2e3a7dec40dc8810f5a7485f8"} Jan 20 09:14:18 crc kubenswrapper[5115]: I0120 09:14:18.872359 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wbbcl" podStartSLOduration=3.594420485 podStartE2EDuration="4.872341394s" podCreationTimestamp="2026-01-20 09:14:14 +0000 UTC" firstStartedPulling="2026-01-20 09:14:15.800084505 +0000 UTC m=+365.968863035" lastFinishedPulling="2026-01-20 09:14:17.078005414 +0000 UTC m=+367.246783944" observedRunningTime="2026-01-20 09:14:18.870977387 +0000 UTC m=+369.039755927" watchObservedRunningTime="2026-01-20 09:14:18.872341394 +0000 UTC m=+369.041119924" Jan 20 09:14:19 crc kubenswrapper[5115]: I0120 09:14:19.863128 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl5h2" event={"ID":"3ea34b88-772f-448a-ba98-33a5deda3740","Type":"ContainerStarted","Data":"277f00fb916af87255d45dbd71d4e12a5c6d49416d37aafc1b909e1ea277f2ad"} Jan 20 09:14:19 crc kubenswrapper[5115]: I0120 09:14:19.879784 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vl5h2" podStartSLOduration=3.466517403 podStartE2EDuration="4.879765539s" podCreationTimestamp="2026-01-20 09:14:15 +0000 UTC" firstStartedPulling="2026-01-20 09:14:16.82594797 +0000 UTC m=+366.994726540" lastFinishedPulling="2026-01-20 09:14:18.239196156 +0000 UTC m=+368.407974676" observedRunningTime="2026-01-20 09:14:19.878618818 +0000 UTC m=+370.047397368" watchObservedRunningTime="2026-01-20 09:14:19.879765539 +0000 UTC m=+370.048544089" Jan 20 09:14:22 crc kubenswrapper[5115]: I0120 09:14:22.104857 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-fz98h" Jan 20 09:14:22 crc kubenswrapper[5115]: I0120 09:14:22.105314 5115 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fz98h" Jan 20 09:14:22 crc kubenswrapper[5115]: I0120 09:14:22.145331 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fz98h" Jan 20 09:14:22 crc kubenswrapper[5115]: I0120 09:14:22.943333 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fz98h" Jan 20 09:14:23 crc kubenswrapper[5115]: I0120 09:14:23.120114 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-9ckvv" Jan 20 09:14:23 crc kubenswrapper[5115]: I0120 09:14:23.120349 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9ckvv" Jan 20 09:14:23 crc kubenswrapper[5115]: I0120 09:14:23.157476 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9ckvv" Jan 20 09:14:23 crc kubenswrapper[5115]: I0120 09:14:23.942149 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9ckvv" Jan 20 09:14:24 crc kubenswrapper[5115]: I0120 09:14:24.478220 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:24 crc kubenswrapper[5115]: I0120 09:14:24.478406 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:24 crc kubenswrapper[5115]: I0120 09:14:24.522160 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:24 crc kubenswrapper[5115]: I0120 09:14:24.941380 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-wbbcl" Jan 20 09:14:25 crc kubenswrapper[5115]: I0120 09:14:25.864047 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:25 crc kubenswrapper[5115]: I0120 09:14:25.864352 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:25 crc kubenswrapper[5115]: I0120 09:14:25.920974 5115 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:25 crc kubenswrapper[5115]: I0120 09:14:25.969154 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vl5h2" Jan 20 09:14:36 crc kubenswrapper[5115]: I0120 09:14:36.831196 5115 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-wg9m7" Jan 20 09:14:36 crc kubenswrapper[5115]: I0120 09:14:36.886334 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-b674j"] Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.204462 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7"] Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.227112 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7"] Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.227550 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.229826 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.230930 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.334674 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec78294-76de-4a69-ba13-bf1bc31bd32f-config-volume\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.334812 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlz9d\" (UniqueName: \"kubernetes.io/projected/bec78294-76de-4a69-ba13-bf1bc31bd32f-kube-api-access-wlz9d\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.334888 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec78294-76de-4a69-ba13-bf1bc31bd32f-secret-volume\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.437119 5115 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"kube-api-access-wlz9d\" (UniqueName: \"kubernetes.io/projected/bec78294-76de-4a69-ba13-bf1bc31bd32f-kube-api-access-wlz9d\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.437210 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec78294-76de-4a69-ba13-bf1bc31bd32f-secret-volume\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.437256 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec78294-76de-4a69-ba13-bf1bc31bd32f-config-volume\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.438461 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec78294-76de-4a69-ba13-bf1bc31bd32f-config-volume\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.446973 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec78294-76de-4a69-ba13-bf1bc31bd32f-secret-volume\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc 
kubenswrapper[5115]: I0120 09:15:00.468945 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlz9d\" (UniqueName: \"kubernetes.io/projected/bec78294-76de-4a69-ba13-bf1bc31bd32f-kube-api-access-wlz9d\") pod \"collect-profiles-29481675-n6rz7\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.550428 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:00 crc kubenswrapper[5115]: I0120 09:15:00.965628 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7"] Jan 20 09:15:00 crc kubenswrapper[5115]: W0120 09:15:00.975910 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbec78294_76de_4a69_ba13_bf1bc31bd32f.slice/crio-f67f0658f034cd9c1511e53723d81502d0b899c392214043c657cf0a16b1f984 WatchSource:0}: Error finding container f67f0658f034cd9c1511e53723d81502d0b899c392214043c657cf0a16b1f984: Status 404 returned error can't find the container with id f67f0658f034cd9c1511e53723d81502d0b899c392214043c657cf0a16b1f984 Jan 20 09:15:01 crc kubenswrapper[5115]: I0120 09:15:01.154353 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" event={"ID":"bec78294-76de-4a69-ba13-bf1bc31bd32f","Type":"ContainerStarted","Data":"2aab00a96e1f4cba3cc540f81abaf3c112e5e9d11f36adba6aa69acd02843a55"} Jan 20 09:15:01 crc kubenswrapper[5115]: I0120 09:15:01.154404 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" 
event={"ID":"bec78294-76de-4a69-ba13-bf1bc31bd32f","Type":"ContainerStarted","Data":"f67f0658f034cd9c1511e53723d81502d0b899c392214043c657cf0a16b1f984"} Jan 20 09:15:01 crc kubenswrapper[5115]: I0120 09:15:01.171637 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" podStartSLOduration=1.171616868 podStartE2EDuration="1.171616868s" podCreationTimestamp="2026-01-20 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 09:15:01.171069224 +0000 UTC m=+411.339847764" watchObservedRunningTime="2026-01-20 09:15:01.171616868 +0000 UTC m=+411.340395398" Jan 20 09:15:01 crc kubenswrapper[5115]: I0120 09:15:01.962496 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-b674j" podUID="580c8ecd-e9bb-4c33-aeb2-f304adb8119c" containerName="registry" containerID="cri-o://658aaa1c341101e06f75ed771bab4ffef1039984a8c36f1f22e7f660d9e832ca" gracePeriod=30 Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.161147 5115 generic.go:358] "Generic (PLEG): container finished" podID="580c8ecd-e9bb-4c33-aeb2-f304adb8119c" containerID="658aaa1c341101e06f75ed771bab4ffef1039984a8c36f1f22e7f660d9e832ca" exitCode=0 Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.161266 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-b674j" event={"ID":"580c8ecd-e9bb-4c33-aeb2-f304adb8119c","Type":"ContainerDied","Data":"658aaa1c341101e06f75ed771bab4ffef1039984a8c36f1f22e7f660d9e832ca"} Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.163090 5115 generic.go:358] "Generic (PLEG): container finished" podID="bec78294-76de-4a69-ba13-bf1bc31bd32f" containerID="2aab00a96e1f4cba3cc540f81abaf3c112e5e9d11f36adba6aa69acd02843a55" exitCode=0 Jan 20 09:15:02 crc 
kubenswrapper[5115]: I0120 09:15:02.163289 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" event={"ID":"bec78294-76de-4a69-ba13-bf1bc31bd32f","Type":"ContainerDied","Data":"2aab00a96e1f4cba3cc540f81abaf3c112e5e9d11f36adba6aa69acd02843a55"} Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.379068 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.563359 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-trusted-ca\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.563455 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-certificates\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.563933 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.563999 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-ca-trust-extracted\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc 
kubenswrapper[5115]: I0120 09:15:02.564051 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-bound-sa-token\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.564279 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7mcb\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-kube-api-access-v7mcb\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.564473 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-installation-pull-secrets\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.564663 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-tls\") pod \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\" (UID: \"580c8ecd-e9bb-4c33-aeb2-f304adb8119c\") " Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.565383 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.567085 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.577184 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.577519 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-kube-api-access-v7mcb" (OuterVolumeSpecName: "kube-api-access-v7mcb") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "kube-api-access-v7mcb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.578554 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.579202 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.580507 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.589612 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "580c8ecd-e9bb-4c33-aeb2-f304adb8119c" (UID: "580c8ecd-e9bb-4c33-aeb2-f304adb8119c"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666768 5115 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666809 5115 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666821 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v7mcb\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-kube-api-access-v7mcb\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666839 5115 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666851 5115 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666862 5115 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:02 crc kubenswrapper[5115]: I0120 09:15:02.666873 5115 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/580c8ecd-e9bb-4c33-aeb2-f304adb8119c-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:03 crc 
kubenswrapper[5115]: I0120 09:15:03.194156 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-b674j" Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.194234 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-b674j" event={"ID":"580c8ecd-e9bb-4c33-aeb2-f304adb8119c","Type":"ContainerDied","Data":"d053f0589af44bf1ec4966f80948e0266381b97821c76787bebafd985060d717"} Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.194598 5115 scope.go:117] "RemoveContainer" containerID="658aaa1c341101e06f75ed771bab4ffef1039984a8c36f1f22e7f660d9e832ca" Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.230993 5115 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-b674j"] Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.231043 5115 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-b674j"] Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.448354 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.490308 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlz9d\" (UniqueName: \"kubernetes.io/projected/bec78294-76de-4a69-ba13-bf1bc31bd32f-kube-api-access-wlz9d\") pod \"bec78294-76de-4a69-ba13-bf1bc31bd32f\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.490411 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec78294-76de-4a69-ba13-bf1bc31bd32f-config-volume\") pod \"bec78294-76de-4a69-ba13-bf1bc31bd32f\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.490477 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec78294-76de-4a69-ba13-bf1bc31bd32f-secret-volume\") pod \"bec78294-76de-4a69-ba13-bf1bc31bd32f\" (UID: \"bec78294-76de-4a69-ba13-bf1bc31bd32f\") " Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.491296 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec78294-76de-4a69-ba13-bf1bc31bd32f-config-volume" (OuterVolumeSpecName: "config-volume") pod "bec78294-76de-4a69-ba13-bf1bc31bd32f" (UID: "bec78294-76de-4a69-ba13-bf1bc31bd32f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.496128 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bec78294-76de-4a69-ba13-bf1bc31bd32f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bec78294-76de-4a69-ba13-bf1bc31bd32f" (UID: "bec78294-76de-4a69-ba13-bf1bc31bd32f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.496243 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec78294-76de-4a69-ba13-bf1bc31bd32f-kube-api-access-wlz9d" (OuterVolumeSpecName: "kube-api-access-wlz9d") pod "bec78294-76de-4a69-ba13-bf1bc31bd32f" (UID: "bec78294-76de-4a69-ba13-bf1bc31bd32f"). InnerVolumeSpecName "kube-api-access-wlz9d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.592049 5115 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bec78294-76de-4a69-ba13-bf1bc31bd32f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.592081 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wlz9d\" (UniqueName: \"kubernetes.io/projected/bec78294-76de-4a69-ba13-bf1bc31bd32f-kube-api-access-wlz9d\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:03 crc kubenswrapper[5115]: I0120 09:15:03.592090 5115 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec78294-76de-4a69-ba13-bf1bc31bd32f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 09:15:04 crc kubenswrapper[5115]: I0120 09:15:04.204088 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" Jan 20 09:15:04 crc kubenswrapper[5115]: I0120 09:15:04.204111 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481675-n6rz7" event={"ID":"bec78294-76de-4a69-ba13-bf1bc31bd32f","Type":"ContainerDied","Data":"f67f0658f034cd9c1511e53723d81502d0b899c392214043c657cf0a16b1f984"} Jan 20 09:15:04 crc kubenswrapper[5115]: I0120 09:15:04.204171 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f67f0658f034cd9c1511e53723d81502d0b899c392214043c657cf0a16b1f984" Jan 20 09:15:04 crc kubenswrapper[5115]: I0120 09:15:04.230810 5115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580c8ecd-e9bb-4c33-aeb2-f304adb8119c" path="/var/lib/kubelet/pods/580c8ecd-e9bb-4c33-aeb2-f304adb8119c/volumes" Jan 20 09:15:08 crc kubenswrapper[5115]: I0120 09:15:08.483001 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:15:08 crc kubenswrapper[5115]: I0120 09:15:08.483921 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:15:10 crc kubenswrapper[5115]: I0120 09:15:10.554696 5115 scope.go:117] "RemoveContainer" containerID="6a65133584c92a02557ec7a68bc231cbf328c72b94121d393761fae9e77a43df" Jan 20 09:15:10 crc kubenswrapper[5115]: I0120 09:15:10.582368 5115 scope.go:117] "RemoveContainer" 
containerID="732f833d741db4f25185d597b6c55514eac6e2fefadb22332239b99e78faa12c" Jan 20 09:15:10 crc kubenswrapper[5115]: I0120 09:15:10.611606 5115 scope.go:117] "RemoveContainer" containerID="4459efcaad2c1e7ab6acad4f70731a19325a72c01d38b2f5c5ebb0e654c3e652" Jan 20 09:15:10 crc kubenswrapper[5115]: I0120 09:15:10.637065 5115 scope.go:117] "RemoveContainer" containerID="7bc7ce39ff7ab01bae0a1441c0086dd0bb588059f1c38dcf038a03d08f73e0f5" Jan 20 09:15:10 crc kubenswrapper[5115]: I0120 09:15:10.659041 5115 scope.go:117] "RemoveContainer" containerID="f042b661a3072f2466176ad58de653a1ac5fd34d0d1c9b846b833b88bded9006" Jan 20 09:15:38 crc kubenswrapper[5115]: I0120 09:15:38.483361 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:15:38 crc kubenswrapper[5115]: I0120 09:15:38.484145 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.139430 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29481676-t6krr"] Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.140552 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="580c8ecd-e9bb-4c33-aeb2-f304adb8119c" containerName="registry" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.140566 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="580c8ecd-e9bb-4c33-aeb2-f304adb8119c" containerName="registry" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.140577 5115 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bec78294-76de-4a69-ba13-bf1bc31bd32f" containerName="collect-profiles" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.140582 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec78294-76de-4a69-ba13-bf1bc31bd32f" containerName="collect-profiles" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.140681 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="580c8ecd-e9bb-4c33-aeb2-f304adb8119c" containerName="registry" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.140699 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="bec78294-76de-4a69-ba13-bf1bc31bd32f" containerName="collect-profiles" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.162218 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29481676-t6krr" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.164994 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29481676-t6krr"] Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.167732 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-7txkl\"" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.168071 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.168473 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.311323 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfqsk\" (UniqueName: \"kubernetes.io/projected/c6a29366-7d58-427e-a357-043043b83881-kube-api-access-vfqsk\") pod 
\"auto-csr-approver-29481676-t6krr\" (UID: \"c6a29366-7d58-427e-a357-043043b83881\") " pod="openshift-infra/auto-csr-approver-29481676-t6krr" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.413389 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vfqsk\" (UniqueName: \"kubernetes.io/projected/c6a29366-7d58-427e-a357-043043b83881-kube-api-access-vfqsk\") pod \"auto-csr-approver-29481676-t6krr\" (UID: \"c6a29366-7d58-427e-a357-043043b83881\") " pod="openshift-infra/auto-csr-approver-29481676-t6krr" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.447008 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfqsk\" (UniqueName: \"kubernetes.io/projected/c6a29366-7d58-427e-a357-043043b83881-kube-api-access-vfqsk\") pod \"auto-csr-approver-29481676-t6krr\" (UID: \"c6a29366-7d58-427e-a357-043043b83881\") " pod="openshift-infra/auto-csr-approver-29481676-t6krr" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.500000 5115 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29481676-t6krr" Jan 20 09:16:00 crc kubenswrapper[5115]: I0120 09:16:00.986677 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29481676-t6krr"] Jan 20 09:16:01 crc kubenswrapper[5115]: I0120 09:16:01.665028 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481676-t6krr" event={"ID":"c6a29366-7d58-427e-a357-043043b83881","Type":"ContainerStarted","Data":"b9b2bc15b0761e31fb15f9e9d3ee8d3c4b0d8b925fa461a7081a9831a8a2dd97"} Jan 20 09:16:05 crc kubenswrapper[5115]: I0120 09:16:05.692797 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481676-t6krr" event={"ID":"c6a29366-7d58-427e-a357-043043b83881","Type":"ContainerStarted","Data":"c04240c9c88a0e670c1ddaaec72be9e2f9060795f59d8b40a12f489449b36d51"} Jan 20 09:16:05 crc kubenswrapper[5115]: I0120 09:16:05.717243 5115 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29481676-t6krr" podStartSLOduration=1.6373720189999998 podStartE2EDuration="5.717212012s" podCreationTimestamp="2026-01-20 09:16:00 +0000 UTC" firstStartedPulling="2026-01-20 09:16:00.998208174 +0000 UTC m=+471.166986714" lastFinishedPulling="2026-01-20 09:16:05.078048177 +0000 UTC m=+475.246826707" observedRunningTime="2026-01-20 09:16:05.71005731 +0000 UTC m=+475.878835890" watchObservedRunningTime="2026-01-20 09:16:05.717212012 +0000 UTC m=+475.885990572" Jan 20 09:16:05 crc kubenswrapper[5115]: I0120 09:16:05.755626 5115 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-d8t2v" Jan 20 09:16:05 crc kubenswrapper[5115]: I0120 09:16:05.793141 5115 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-d8t2v" Jan 20 09:16:06 crc kubenswrapper[5115]: I0120 09:16:06.703142 5115 generic.go:358] 
"Generic (PLEG): container finished" podID="c6a29366-7d58-427e-a357-043043b83881" containerID="c04240c9c88a0e670c1ddaaec72be9e2f9060795f59d8b40a12f489449b36d51" exitCode=0 Jan 20 09:16:06 crc kubenswrapper[5115]: I0120 09:16:06.703338 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481676-t6krr" event={"ID":"c6a29366-7d58-427e-a357-043043b83881","Type":"ContainerDied","Data":"c04240c9c88a0e670c1ddaaec72be9e2f9060795f59d8b40a12f489449b36d51"} Jan 20 09:16:06 crc kubenswrapper[5115]: I0120 09:16:06.794470 5115 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-19 09:11:05 +0000 UTC" deadline="2026-02-14 00:31:48.277195595 +0000 UTC" Jan 20 09:16:06 crc kubenswrapper[5115]: I0120 09:16:06.794520 5115 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="591h15m41.482680488s" Jan 20 09:16:07 crc kubenswrapper[5115]: I0120 09:16:07.795208 5115 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-19 09:11:05 +0000 UTC" deadline="2026-02-10 11:40:13.537305609 +0000 UTC" Jan 20 09:16:07 crc kubenswrapper[5115]: I0120 09:16:07.795706 5115 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="506h24m5.741608834s" Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.002163 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29481676-t6krr" Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.039852 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfqsk\" (UniqueName: \"kubernetes.io/projected/c6a29366-7d58-427e-a357-043043b83881-kube-api-access-vfqsk\") pod \"c6a29366-7d58-427e-a357-043043b83881\" (UID: \"c6a29366-7d58-427e-a357-043043b83881\") " Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.046398 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a29366-7d58-427e-a357-043043b83881-kube-api-access-vfqsk" (OuterVolumeSpecName: "kube-api-access-vfqsk") pod "c6a29366-7d58-427e-a357-043043b83881" (UID: "c6a29366-7d58-427e-a357-043043b83881"). InnerVolumeSpecName "kube-api-access-vfqsk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.140792 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vfqsk\" (UniqueName: \"kubernetes.io/projected/c6a29366-7d58-427e-a357-043043b83881-kube-api-access-vfqsk\") on node \"crc\" DevicePath \"\"" Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.482663 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.482715 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 
09:16:08.482753 5115 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.483215 5115 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91dc8479398c4ca8a212adb6ee5aaefb3869b82e5fade77dc4b295c2c867eb29"} pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.483262 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" containerID="cri-o://91dc8479398c4ca8a212adb6ee5aaefb3869b82e5fade77dc4b295c2c867eb29" gracePeriod=600 Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.716024 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481676-t6krr" event={"ID":"c6a29366-7d58-427e-a357-043043b83881","Type":"ContainerDied","Data":"b9b2bc15b0761e31fb15f9e9d3ee8d3c4b0d8b925fa461a7081a9831a8a2dd97"} Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.716382 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9b2bc15b0761e31fb15f9e9d3ee8d3c4b0d8b925fa461a7081a9831a8a2dd97" Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.716480 5115 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29481676-t6krr"
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.718828 5115 generic.go:358] "Generic (PLEG): container finished" podID="dc89765b-3b00-4f86-ae67-a5088c182918" containerID="91dc8479398c4ca8a212adb6ee5aaefb3869b82e5fade77dc4b295c2c867eb29" exitCode=0
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.718876 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerDied","Data":"91dc8479398c4ca8a212adb6ee5aaefb3869b82e5fade77dc4b295c2c867eb29"}
Jan 20 09:16:08 crc kubenswrapper[5115]: I0120 09:16:08.718996 5115 scope.go:117] "RemoveContainer" containerID="95c07e0438f206b88563e2b39a6250eb2706530b4f1d2646ed4348287befe586"
Jan 20 09:16:09 crc kubenswrapper[5115]: I0120 09:16:09.727741 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerStarted","Data":"318a92888a4aefb646ef70769ebe07ba2549fcaa74b80c7afec657d563a87cf0"}
Jan 20 09:16:10 crc kubenswrapper[5115]: I0120 09:16:10.818530 5115 scope.go:117] "RemoveContainer" containerID="cd35bfe818999fb69f754d3ef537d63114d8766c9a55fd8c1f055b4598993e53"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.156635 5115 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29481678-rk846"]
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.158346 5115 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c6a29366-7d58-427e-a357-043043b83881" containerName="oc"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.158376 5115 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a29366-7d58-427e-a357-043043b83881" containerName="oc"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.158529 5115 memory_manager.go:356] "RemoveStaleState removing state" podUID="c6a29366-7d58-427e-a357-043043b83881" containerName="oc"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.181391 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29481678-rk846"]
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.181544 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.184512 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.184618 5115 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-7txkl\""
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.185425 5115 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.276560 5115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvjnj\" (UniqueName: \"kubernetes.io/projected/3bd4b257-185a-4876-9eeb-4d69084bad68-kube-api-access-wvjnj\") pod \"auto-csr-approver-29481678-rk846\" (UID: \"3bd4b257-185a-4876-9eeb-4d69084bad68\") " pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.378691 5115 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wvjnj\" (UniqueName: \"kubernetes.io/projected/3bd4b257-185a-4876-9eeb-4d69084bad68-kube-api-access-wvjnj\") pod \"auto-csr-approver-29481678-rk846\" (UID: \"3bd4b257-185a-4876-9eeb-4d69084bad68\") " pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.414034 5115 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvjnj\" (UniqueName: \"kubernetes.io/projected/3bd4b257-185a-4876-9eeb-4d69084bad68-kube-api-access-wvjnj\") pod \"auto-csr-approver-29481678-rk846\" (UID: \"3bd4b257-185a-4876-9eeb-4d69084bad68\") " pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.513120 5115 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:00 crc kubenswrapper[5115]: I0120 09:18:00.808075 5115 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29481678-rk846"]
Jan 20 09:18:00 crc kubenswrapper[5115]: W0120 09:18:00.813605 5115 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bd4b257_185a_4876_9eeb_4d69084bad68.slice/crio-2fbd04cabed3e38333366ca27582242011f1e6a4fdc3114f0e91d6c35b249bbe WatchSource:0}: Error finding container 2fbd04cabed3e38333366ca27582242011f1e6a4fdc3114f0e91d6c35b249bbe: Status 404 returned error can't find the container with id 2fbd04cabed3e38333366ca27582242011f1e6a4fdc3114f0e91d6c35b249bbe
Jan 20 09:18:01 crc kubenswrapper[5115]: I0120 09:18:01.525286 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481678-rk846" event={"ID":"3bd4b257-185a-4876-9eeb-4d69084bad68","Type":"ContainerStarted","Data":"2fbd04cabed3e38333366ca27582242011f1e6a4fdc3114f0e91d6c35b249bbe"}
Jan 20 09:18:03 crc kubenswrapper[5115]: I0120 09:18:03.541732 5115 generic.go:358] "Generic (PLEG): container finished" podID="3bd4b257-185a-4876-9eeb-4d69084bad68" containerID="aaee7dc4a03126bb7351fc6e6855c258363d9583e2c9910d8ea9adb20ddc6909" exitCode=0
Jan 20 09:18:03 crc kubenswrapper[5115]: I0120 09:18:03.541808 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481678-rk846" event={"ID":"3bd4b257-185a-4876-9eeb-4d69084bad68","Type":"ContainerDied","Data":"aaee7dc4a03126bb7351fc6e6855c258363d9583e2c9910d8ea9adb20ddc6909"}
Jan 20 09:18:04 crc kubenswrapper[5115]: I0120 09:18:04.806613 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:04 crc kubenswrapper[5115]: I0120 09:18:04.957218 5115 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvjnj\" (UniqueName: \"kubernetes.io/projected/3bd4b257-185a-4876-9eeb-4d69084bad68-kube-api-access-wvjnj\") pod \"3bd4b257-185a-4876-9eeb-4d69084bad68\" (UID: \"3bd4b257-185a-4876-9eeb-4d69084bad68\") "
Jan 20 09:18:04 crc kubenswrapper[5115]: I0120 09:18:04.964018 5115 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd4b257-185a-4876-9eeb-4d69084bad68-kube-api-access-wvjnj" (OuterVolumeSpecName: "kube-api-access-wvjnj") pod "3bd4b257-185a-4876-9eeb-4d69084bad68" (UID: "3bd4b257-185a-4876-9eeb-4d69084bad68"). InnerVolumeSpecName "kube-api-access-wvjnj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 09:18:05 crc kubenswrapper[5115]: I0120 09:18:05.059051 5115 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wvjnj\" (UniqueName: \"kubernetes.io/projected/3bd4b257-185a-4876-9eeb-4d69084bad68-kube-api-access-wvjnj\") on node \"crc\" DevicePath \"\""
Jan 20 09:18:05 crc kubenswrapper[5115]: I0120 09:18:05.554703 5115 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29481678-rk846"
Jan 20 09:18:05 crc kubenswrapper[5115]: I0120 09:18:05.554761 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29481678-rk846" event={"ID":"3bd4b257-185a-4876-9eeb-4d69084bad68","Type":"ContainerDied","Data":"2fbd04cabed3e38333366ca27582242011f1e6a4fdc3114f0e91d6c35b249bbe"}
Jan 20 09:18:05 crc kubenswrapper[5115]: I0120 09:18:05.554791 5115 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fbd04cabed3e38333366ca27582242011f1e6a4fdc3114f0e91d6c35b249bbe"
Jan 20 09:18:08 crc kubenswrapper[5115]: I0120 09:18:08.483267 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:18:08 crc kubenswrapper[5115]: I0120 09:18:08.483741 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:18:10 crc kubenswrapper[5115]: I0120 09:18:10.460083 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 20 09:18:10 crc kubenswrapper[5115]: I0120 09:18:10.461802 5115 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 20 09:18:38 crc kubenswrapper[5115]: I0120 09:18:38.483137 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:18:38 crc kubenswrapper[5115]: I0120 09:18:38.483818 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.483656 5115 patch_prober.go:28] interesting pod/machine-config-daemon-zvfcd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.484746 5115 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.484844 5115 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd"
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.486074 5115 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"318a92888a4aefb646ef70769ebe07ba2549fcaa74b80c7afec657d563a87cf0"} pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.486187 5115 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" podUID="dc89765b-3b00-4f86-ae67-a5088c182918" containerName="machine-config-daemon" containerID="cri-o://318a92888a4aefb646ef70769ebe07ba2549fcaa74b80c7afec657d563a87cf0" gracePeriod=600
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.631923 5115 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.984888 5115 generic.go:358] "Generic (PLEG): container finished" podID="dc89765b-3b00-4f86-ae67-a5088c182918" containerID="318a92888a4aefb646ef70769ebe07ba2549fcaa74b80c7afec657d563a87cf0" exitCode=0
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.984948 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerDied","Data":"318a92888a4aefb646ef70769ebe07ba2549fcaa74b80c7afec657d563a87cf0"}
Jan 20 09:19:08 crc kubenswrapper[5115]: I0120 09:19:08.985036 5115 scope.go:117] "RemoveContainer" containerID="91dc8479398c4ca8a212adb6ee5aaefb3869b82e5fade77dc4b295c2c867eb29"
Jan 20 09:19:09 crc kubenswrapper[5115]: I0120 09:19:09.995519 5115 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zvfcd" event={"ID":"dc89765b-3b00-4f86-ae67-a5088c182918","Type":"ContainerStarted","Data":"3c8d58d8b9258defba8eb8fcd56ea4a754ea8ca5ded8c883cc93464635be9331"}